Jan 10 11:23:51 np0005580781 kernel: Linux version 5.14.0-655.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Mon Dec 29 08:24:22 UTC 2025
Jan 10 11:23:51 np0005580781 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 10 11:23:51 np0005580781 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 10 11:23:51 np0005580781 kernel: BIOS-provided physical RAM map:
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 10 11:23:51 np0005580781 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 10 11:23:51 np0005580781 kernel: NX (Execute Disable) protection: active
Jan 10 11:23:51 np0005580781 kernel: APIC: Static calls initialized
Jan 10 11:23:51 np0005580781 kernel: SMBIOS 2.8 present.
Jan 10 11:23:51 np0005580781 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 10 11:23:51 np0005580781 kernel: Hypervisor detected: KVM
Jan 10 11:23:51 np0005580781 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 10 11:23:51 np0005580781 kernel: kvm-clock: using sched offset of 3268518381 cycles
Jan 10 11:23:51 np0005580781 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 10 11:23:51 np0005580781 kernel: tsc: Detected 2800.000 MHz processor
Jan 10 11:23:51 np0005580781 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 10 11:23:51 np0005580781 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 10 11:23:51 np0005580781 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 10 11:23:51 np0005580781 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 10 11:23:51 np0005580781 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 10 11:23:51 np0005580781 kernel: Using GB pages for direct mapping
Jan 10 11:23:51 np0005580781 kernel: RAMDISK: [mem 0x2d461000-0x32a28fff]
Jan 10 11:23:51 np0005580781 kernel: ACPI: Early table checksum verification disabled
Jan 10 11:23:51 np0005580781 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 10 11:23:51 np0005580781 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 11:23:51 np0005580781 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 11:23:51 np0005580781 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 11:23:51 np0005580781 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 10 11:23:51 np0005580781 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 11:23:51 np0005580781 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 10 11:23:51 np0005580781 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 10 11:23:51 np0005580781 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 10 11:23:51 np0005580781 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 10 11:23:51 np0005580781 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 10 11:23:51 np0005580781 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 10 11:23:51 np0005580781 kernel: No NUMA configuration found
Jan 10 11:23:51 np0005580781 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 10 11:23:51 np0005580781 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 10 11:23:51 np0005580781 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 10 11:23:51 np0005580781 kernel: Zone ranges:
Jan 10 11:23:51 np0005580781 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 10 11:23:51 np0005580781 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 10 11:23:51 np0005580781 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 10 11:23:51 np0005580781 kernel:  Device   empty
Jan 10 11:23:51 np0005580781 kernel: Movable zone start for each node
Jan 10 11:23:51 np0005580781 kernel: Early memory node ranges
Jan 10 11:23:51 np0005580781 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 10 11:23:51 np0005580781 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 10 11:23:51 np0005580781 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 10 11:23:51 np0005580781 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 10 11:23:51 np0005580781 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 10 11:23:51 np0005580781 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 10 11:23:51 np0005580781 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 10 11:23:51 np0005580781 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 10 11:23:51 np0005580781 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 10 11:23:51 np0005580781 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 10 11:23:51 np0005580781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 10 11:23:51 np0005580781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 10 11:23:51 np0005580781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 10 11:23:51 np0005580781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 10 11:23:51 np0005580781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 10 11:23:51 np0005580781 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 10 11:23:51 np0005580781 kernel: TSC deadline timer available
Jan 10 11:23:51 np0005580781 kernel: CPU topo: Max. logical packages:   8
Jan 10 11:23:51 np0005580781 kernel: CPU topo: Max. logical dies:       8
Jan 10 11:23:51 np0005580781 kernel: CPU topo: Max. dies per package:   1
Jan 10 11:23:51 np0005580781 kernel: CPU topo: Max. threads per core:   1
Jan 10 11:23:51 np0005580781 kernel: CPU topo: Num. cores per package:     1
Jan 10 11:23:51 np0005580781 kernel: CPU topo: Num. threads per package:   1
Jan 10 11:23:51 np0005580781 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 10 11:23:51 np0005580781 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 10 11:23:51 np0005580781 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 10 11:23:51 np0005580781 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 10 11:23:51 np0005580781 kernel: Booting paravirtualized kernel on KVM
Jan 10 11:23:51 np0005580781 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 10 11:23:51 np0005580781 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 10 11:23:51 np0005580781 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 10 11:23:51 np0005580781 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 10 11:23:51 np0005580781 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 10 11:23:51 np0005580781 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64", will be passed to user space.
Jan 10 11:23:51 np0005580781 kernel: random: crng init done
Jan 10 11:23:51 np0005580781 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: Fallback order for Node 0: 0 
Jan 10 11:23:51 np0005580781 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 10 11:23:51 np0005580781 kernel: Policy zone: Normal
Jan 10 11:23:51 np0005580781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 10 11:23:51 np0005580781 kernel: software IO TLB: area num 8.
Jan 10 11:23:51 np0005580781 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 10 11:23:51 np0005580781 kernel: ftrace: allocating 49414 entries in 194 pages
Jan 10 11:23:51 np0005580781 kernel: ftrace: allocated 194 pages with 3 groups
Jan 10 11:23:51 np0005580781 kernel: Dynamic Preempt: voluntary
Jan 10 11:23:51 np0005580781 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 10 11:23:51 np0005580781 kernel: rcu: 	RCU event tracing is enabled.
Jan 10 11:23:51 np0005580781 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 10 11:23:51 np0005580781 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 10 11:23:51 np0005580781 kernel: 	Rude variant of Tasks RCU enabled.
Jan 10 11:23:51 np0005580781 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 10 11:23:51 np0005580781 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 10 11:23:51 np0005580781 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 10 11:23:51 np0005580781 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 10 11:23:51 np0005580781 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 10 11:23:51 np0005580781 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 10 11:23:51 np0005580781 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 10 11:23:51 np0005580781 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 10 11:23:51 np0005580781 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 10 11:23:51 np0005580781 kernel: Console: colour VGA+ 80x25
Jan 10 11:23:51 np0005580781 kernel: printk: console [ttyS0] enabled
Jan 10 11:23:51 np0005580781 kernel: ACPI: Core revision 20230331
Jan 10 11:23:51 np0005580781 kernel: APIC: Switch to symmetric I/O mode setup
Jan 10 11:23:51 np0005580781 kernel: x2apic enabled
Jan 10 11:23:51 np0005580781 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 10 11:23:51 np0005580781 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 10 11:23:51 np0005580781 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 10 11:23:51 np0005580781 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 10 11:23:51 np0005580781 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 10 11:23:51 np0005580781 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 10 11:23:51 np0005580781 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 10 11:23:51 np0005580781 kernel: Spectre V2 : Mitigation: Retpolines
Jan 10 11:23:51 np0005580781 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 10 11:23:51 np0005580781 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 10 11:23:51 np0005580781 kernel: RETBleed: Mitigation: untrained return thunk
Jan 10 11:23:51 np0005580781 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 10 11:23:51 np0005580781 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 10 11:23:51 np0005580781 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 10 11:23:51 np0005580781 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 10 11:23:51 np0005580781 kernel: x86/bugs: return thunk changed
Jan 10 11:23:51 np0005580781 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 10 11:23:51 np0005580781 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 10 11:23:51 np0005580781 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 10 11:23:51 np0005580781 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 10 11:23:51 np0005580781 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 10 11:23:51 np0005580781 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 10 11:23:51 np0005580781 kernel: Freeing SMP alternatives memory: 40K
Jan 10 11:23:51 np0005580781 kernel: pid_max: default: 32768 minimum: 301
Jan 10 11:23:51 np0005580781 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 10 11:23:51 np0005580781 kernel: landlock: Up and running.
Jan 10 11:23:51 np0005580781 kernel: Yama: becoming mindful.
Jan 10 11:23:51 np0005580781 kernel: SELinux:  Initializing.
Jan 10 11:23:51 np0005580781 kernel: LSM support for eBPF active
Jan 10 11:23:51 np0005580781 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 10 11:23:51 np0005580781 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 10 11:23:51 np0005580781 kernel: ... version:                0
Jan 10 11:23:51 np0005580781 kernel: ... bit width:              48
Jan 10 11:23:51 np0005580781 kernel: ... generic registers:      6
Jan 10 11:23:51 np0005580781 kernel: ... value mask:             0000ffffffffffff
Jan 10 11:23:51 np0005580781 kernel: ... max period:             00007fffffffffff
Jan 10 11:23:51 np0005580781 kernel: ... fixed-purpose events:   0
Jan 10 11:23:51 np0005580781 kernel: ... event mask:             000000000000003f
Jan 10 11:23:51 np0005580781 kernel: signal: max sigframe size: 1776
Jan 10 11:23:51 np0005580781 kernel: rcu: Hierarchical SRCU implementation.
Jan 10 11:23:51 np0005580781 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 10 11:23:51 np0005580781 kernel: smp: Bringing up secondary CPUs ...
Jan 10 11:23:51 np0005580781 kernel: smpboot: x86: Booting SMP configuration:
Jan 10 11:23:51 np0005580781 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 10 11:23:51 np0005580781 kernel: smp: Brought up 1 node, 8 CPUs
Jan 10 11:23:51 np0005580781 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 10 11:23:51 np0005580781 kernel: node 0 deferred pages initialised in 8ms
Jan 10 11:23:51 np0005580781 kernel: Memory: 7763860K/8388068K available (16384K kernel code, 5796K rwdata, 13908K rodata, 4196K init, 7200K bss, 618248K reserved, 0K cma-reserved)
Jan 10 11:23:51 np0005580781 kernel: devtmpfs: initialized
Jan 10 11:23:51 np0005580781 kernel: x86/mm: Memory block size: 128MB
Jan 10 11:23:51 np0005580781 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 10 11:23:51 np0005580781 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 10 11:23:51 np0005580781 kernel: pinctrl core: initialized pinctrl subsystem
Jan 10 11:23:51 np0005580781 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 10 11:23:51 np0005580781 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 10 11:23:51 np0005580781 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 10 11:23:51 np0005580781 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 10 11:23:51 np0005580781 kernel: audit: initializing netlink subsys (disabled)
Jan 10 11:23:51 np0005580781 kernel: audit: type=2000 audit(1768062229.520:1): state=initialized audit_enabled=0 res=1
Jan 10 11:23:51 np0005580781 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 10 11:23:51 np0005580781 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 10 11:23:51 np0005580781 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 10 11:23:51 np0005580781 kernel: cpuidle: using governor menu
Jan 10 11:23:51 np0005580781 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 10 11:23:51 np0005580781 kernel: PCI: Using configuration type 1 for base access
Jan 10 11:23:51 np0005580781 kernel: PCI: Using configuration type 1 for extended access
Jan 10 11:23:51 np0005580781 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 10 11:23:51 np0005580781 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 10 11:23:51 np0005580781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 10 11:23:51 np0005580781 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 10 11:23:51 np0005580781 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 10 11:23:51 np0005580781 kernel: Demotion targets for Node 0: null
Jan 10 11:23:51 np0005580781 kernel: cryptd: max_cpu_qlen set to 1000
Jan 10 11:23:51 np0005580781 kernel: ACPI: Added _OSI(Module Device)
Jan 10 11:23:51 np0005580781 kernel: ACPI: Added _OSI(Processor Device)
Jan 10 11:23:51 np0005580781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 10 11:23:51 np0005580781 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 10 11:23:51 np0005580781 kernel: ACPI: Interpreter enabled
Jan 10 11:23:51 np0005580781 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 10 11:23:51 np0005580781 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 10 11:23:51 np0005580781 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 10 11:23:51 np0005580781 kernel: PCI: Using E820 reservations for host bridge windows
Jan 10 11:23:51 np0005580781 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 10 11:23:51 np0005580781 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 10 11:23:51 np0005580781 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [3] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [4] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [5] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [6] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [7] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [8] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [9] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [10] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [11] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [12] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [13] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [14] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [15] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [16] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [17] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [18] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [19] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [20] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [21] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [22] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [23] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [24] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [25] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [26] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [27] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [28] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [29] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [30] registered
Jan 10 11:23:51 np0005580781 kernel: acpiphp: Slot [31] registered
Jan 10 11:23:51 np0005580781 kernel: PCI host bridge to bus 0000:00
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 10 11:23:51 np0005580781 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 10 11:23:51 np0005580781 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 10 11:23:51 np0005580781 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 10 11:23:51 np0005580781 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 10 11:23:51 np0005580781 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 10 11:23:51 np0005580781 kernel: iommu: Default domain type: Translated
Jan 10 11:23:51 np0005580781 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 10 11:23:51 np0005580781 kernel: SCSI subsystem initialized
Jan 10 11:23:51 np0005580781 kernel: ACPI: bus type USB registered
Jan 10 11:23:51 np0005580781 kernel: usbcore: registered new interface driver usbfs
Jan 10 11:23:51 np0005580781 kernel: usbcore: registered new interface driver hub
Jan 10 11:23:51 np0005580781 kernel: usbcore: registered new device driver usb
Jan 10 11:23:51 np0005580781 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 10 11:23:51 np0005580781 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 10 11:23:51 np0005580781 kernel: PTP clock support registered
Jan 10 11:23:51 np0005580781 kernel: EDAC MC: Ver: 3.0.0
Jan 10 11:23:51 np0005580781 kernel: NetLabel: Initializing
Jan 10 11:23:51 np0005580781 kernel: NetLabel:  domain hash size = 128
Jan 10 11:23:51 np0005580781 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 10 11:23:51 np0005580781 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 10 11:23:51 np0005580781 kernel: PCI: Using ACPI for IRQ routing
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 10 11:23:51 np0005580781 kernel: vgaarb: loaded
Jan 10 11:23:51 np0005580781 kernel: clocksource: Switched to clocksource kvm-clock
Jan 10 11:23:51 np0005580781 kernel: VFS: Disk quotas dquot_6.6.0
Jan 10 11:23:51 np0005580781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 10 11:23:51 np0005580781 kernel: pnp: PnP ACPI init
Jan 10 11:23:51 np0005580781 kernel: pnp: PnP ACPI: found 5 devices
Jan 10 11:23:51 np0005580781 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 10 11:23:51 np0005580781 kernel: NET: Registered PF_INET protocol family
Jan 10 11:23:51 np0005580781 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 10 11:23:51 np0005580781 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 10 11:23:51 np0005580781 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 10 11:23:51 np0005580781 kernel: NET: Registered PF_XDP protocol family
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 10 11:23:51 np0005580781 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 10 11:23:51 np0005580781 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 10 11:23:51 np0005580781 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 87131 usecs
Jan 10 11:23:51 np0005580781 kernel: PCI: CLS 0 bytes, default 64
Jan 10 11:23:51 np0005580781 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 10 11:23:51 np0005580781 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 10 11:23:51 np0005580781 kernel: Trying to unpack rootfs image as initramfs...
Jan 10 11:23:51 np0005580781 kernel: ACPI: bus type thunderbolt registered
Jan 10 11:23:51 np0005580781 kernel: Initialise system trusted keyrings
Jan 10 11:23:51 np0005580781 kernel: Key type blacklist registered
Jan 10 11:23:51 np0005580781 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 10 11:23:51 np0005580781 kernel: zbud: loaded
Jan 10 11:23:51 np0005580781 kernel: integrity: Platform Keyring initialized
Jan 10 11:23:51 np0005580781 kernel: integrity: Machine keyring initialized
Jan 10 11:23:51 np0005580781 kernel: Freeing initrd memory: 87840K
Jan 10 11:23:51 np0005580781 kernel: NET: Registered PF_ALG protocol family
Jan 10 11:23:51 np0005580781 kernel: xor: automatically using best checksumming function   avx       
Jan 10 11:23:51 np0005580781 kernel: Key type asymmetric registered
Jan 10 11:23:51 np0005580781 kernel: Asymmetric key parser 'x509' registered
Jan 10 11:23:51 np0005580781 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 10 11:23:51 np0005580781 kernel: io scheduler mq-deadline registered
Jan 10 11:23:51 np0005580781 kernel: io scheduler kyber registered
Jan 10 11:23:51 np0005580781 kernel: io scheduler bfq registered
Jan 10 11:23:51 np0005580781 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 10 11:23:51 np0005580781 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 10 11:23:51 np0005580781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 10 11:23:51 np0005580781 kernel: ACPI: button: Power Button [PWRF]
Jan 10 11:23:51 np0005580781 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 10 11:23:51 np0005580781 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 10 11:23:51 np0005580781 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 10 11:23:51 np0005580781 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 10 11:23:51 np0005580781 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 10 11:23:51 np0005580781 kernel: Non-volatile memory driver v1.3
Jan 10 11:23:51 np0005580781 kernel: rdac: device handler registered
Jan 10 11:23:51 np0005580781 kernel: hp_sw: device handler registered
Jan 10 11:23:51 np0005580781 kernel: emc: device handler registered
Jan 10 11:23:51 np0005580781 kernel: alua: device handler registered
Jan 10 11:23:51 np0005580781 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 10 11:23:51 np0005580781 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 10 11:23:51 np0005580781 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 10 11:23:51 np0005580781 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 10 11:23:51 np0005580781 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 10 11:23:51 np0005580781 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 10 11:23:51 np0005580781 kernel: usb usb1: Product: UHCI Host Controller
Jan 10 11:23:51 np0005580781 kernel: usb usb1: Manufacturer: Linux 5.14.0-655.el9.x86_64 uhci_hcd
Jan 10 11:23:51 np0005580781 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 10 11:23:51 np0005580781 kernel: hub 1-0:1.0: USB hub found
Jan 10 11:23:51 np0005580781 kernel: hub 1-0:1.0: 2 ports detected
Jan 10 11:23:51 np0005580781 kernel: usbcore: registered new interface driver usbserial_generic
Jan 10 11:23:51 np0005580781 kernel: usbserial: USB Serial support registered for generic
Jan 10 11:23:51 np0005580781 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 10 11:23:51 np0005580781 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 10 11:23:51 np0005580781 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 10 11:23:51 np0005580781 kernel: mousedev: PS/2 mouse device common for all mice
Jan 10 11:23:51 np0005580781 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 10 11:23:51 np0005580781 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 10 11:23:51 np0005580781 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 10 11:23:51 np0005580781 kernel: rtc_cmos 00:04: registered as rtc0
Jan 10 11:23:51 np0005580781 kernel: rtc_cmos 00:04: setting system clock to 2026-01-10T16:23:50 UTC (1768062230)
Jan 10 11:23:51 np0005580781 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 10 11:23:51 np0005580781 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 10 11:23:51 np0005580781 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 10 11:23:51 np0005580781 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 10 11:23:51 np0005580781 kernel: usbcore: registered new interface driver usbhid
Jan 10 11:23:51 np0005580781 kernel: usbhid: USB HID core driver
Jan 10 11:23:51 np0005580781 kernel: drop_monitor: Initializing network drop monitor service
Jan 10 11:23:51 np0005580781 kernel: Initializing XFRM netlink socket
Jan 10 11:23:51 np0005580781 kernel: NET: Registered PF_INET6 protocol family
Jan 10 11:23:51 np0005580781 kernel: Segment Routing with IPv6
Jan 10 11:23:51 np0005580781 kernel: NET: Registered PF_PACKET protocol family
Jan 10 11:23:51 np0005580781 kernel: mpls_gso: MPLS GSO support
Jan 10 11:23:51 np0005580781 kernel: IPI shorthand broadcast: enabled
Jan 10 11:23:51 np0005580781 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 10 11:23:51 np0005580781 kernel: AES CTR mode by8 optimization enabled
Jan 10 11:23:51 np0005580781 kernel: sched_clock: Marking stable (1267005970, 143396570)->(1524862889, -114460349)
Jan 10 11:23:51 np0005580781 kernel: registered taskstats version 1
Jan 10 11:23:51 np0005580781 kernel: Loading compiled-in X.509 certificates
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: cff02aed51f99e4030f8d5c362e1fce40d054fe7'
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 10 11:23:51 np0005580781 kernel: Demotion targets for Node 0: null
Jan 10 11:23:51 np0005580781 kernel: page_owner is disabled
Jan 10 11:23:51 np0005580781 kernel: Key type .fscrypt registered
Jan 10 11:23:51 np0005580781 kernel: Key type fscrypt-provisioning registered
Jan 10 11:23:51 np0005580781 kernel: Key type big_key registered
Jan 10 11:23:51 np0005580781 kernel: Key type encrypted registered
Jan 10 11:23:51 np0005580781 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 10 11:23:51 np0005580781 kernel: Loading compiled-in module X.509 certificates
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: cff02aed51f99e4030f8d5c362e1fce40d054fe7'
Jan 10 11:23:51 np0005580781 kernel: ima: Allocated hash algorithm: sha256
Jan 10 11:23:51 np0005580781 kernel: ima: No architecture policies found
Jan 10 11:23:51 np0005580781 kernel: evm: Initialising EVM extended attributes:
Jan 10 11:23:51 np0005580781 kernel: evm: security.selinux
Jan 10 11:23:51 np0005580781 kernel: evm: security.SMACK64 (disabled)
Jan 10 11:23:51 np0005580781 kernel: evm: security.SMACK64EXEC (disabled)
Jan 10 11:23:51 np0005580781 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 10 11:23:51 np0005580781 kernel: evm: security.SMACK64MMAP (disabled)
Jan 10 11:23:51 np0005580781 kernel: evm: security.apparmor (disabled)
Jan 10 11:23:51 np0005580781 kernel: evm: security.ima
Jan 10 11:23:51 np0005580781 kernel: evm: security.capability
Jan 10 11:23:51 np0005580781 kernel: evm: HMAC attrs: 0x1
Jan 10 11:23:51 np0005580781 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 10 11:23:51 np0005580781 kernel: Running certificate verification RSA selftest
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 10 11:23:51 np0005580781 kernel: Running certificate verification ECDSA selftest
Jan 10 11:23:51 np0005580781 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 10 11:23:51 np0005580781 kernel: clk: Disabling unused clocks
Jan 10 11:23:51 np0005580781 kernel: Freeing unused decrypted memory: 2028K
Jan 10 11:23:51 np0005580781 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 10 11:23:51 np0005580781 kernel: Write protecting the kernel read-only data: 30720k
Jan 10 11:23:51 np0005580781 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Jan 10 11:23:51 np0005580781 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 10 11:23:51 np0005580781 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 10 11:23:51 np0005580781 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 10 11:23:51 np0005580781 kernel: usb 1-1: Manufacturer: QEMU
Jan 10 11:23:51 np0005580781 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 10 11:23:51 np0005580781 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 10 11:23:51 np0005580781 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 10 11:23:51 np0005580781 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 10 11:23:51 np0005580781 kernel: Run /init as init process
Jan 10 11:23:51 np0005580781 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 10 11:23:51 np0005580781 systemd: Detected virtualization kvm.
Jan 10 11:23:51 np0005580781 systemd: Detected architecture x86-64.
Jan 10 11:23:51 np0005580781 systemd: Running in initrd.
Jan 10 11:23:51 np0005580781 systemd: No hostname configured, using default hostname.
Jan 10 11:23:51 np0005580781 systemd: Hostname set to <localhost>.
Jan 10 11:23:51 np0005580781 systemd: Initializing machine ID from VM UUID.
Jan 10 11:23:51 np0005580781 systemd: Queued start job for default target Initrd Default Target.
Jan 10 11:23:51 np0005580781 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 10 11:23:51 np0005580781 systemd: Reached target Local Encrypted Volumes.
Jan 10 11:23:51 np0005580781 systemd: Reached target Initrd /usr File System.
Jan 10 11:23:51 np0005580781 systemd: Reached target Local File Systems.
Jan 10 11:23:51 np0005580781 systemd: Reached target Path Units.
Jan 10 11:23:51 np0005580781 systemd: Reached target Slice Units.
Jan 10 11:23:51 np0005580781 systemd: Reached target Swaps.
Jan 10 11:23:51 np0005580781 systemd: Reached target Timer Units.
Jan 10 11:23:51 np0005580781 systemd: Listening on D-Bus System Message Bus Socket.
Jan 10 11:23:51 np0005580781 systemd: Listening on Journal Socket (/dev/log).
Jan 10 11:23:51 np0005580781 systemd: Listening on Journal Socket.
Jan 10 11:23:51 np0005580781 systemd: Listening on udev Control Socket.
Jan 10 11:23:51 np0005580781 systemd: Listening on udev Kernel Socket.
Jan 10 11:23:51 np0005580781 systemd: Reached target Socket Units.
Jan 10 11:23:51 np0005580781 systemd: Starting Create List of Static Device Nodes...
Jan 10 11:23:51 np0005580781 systemd: Starting Journal Service...
Jan 10 11:23:51 np0005580781 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 10 11:23:51 np0005580781 systemd: Starting Apply Kernel Variables...
Jan 10 11:23:51 np0005580781 systemd: Starting Create System Users...
Jan 10 11:23:51 np0005580781 systemd: Starting Setup Virtual Console...
Jan 10 11:23:51 np0005580781 systemd: Finished Create List of Static Device Nodes.
Jan 10 11:23:51 np0005580781 systemd: Finished Apply Kernel Variables.
Jan 10 11:23:51 np0005580781 systemd-journald[309]: Journal started
Jan 10 11:23:51 np0005580781 systemd-journald[309]: Runtime Journal (/run/log/journal/a9d7d54472dd4b089e5e495057bde287) is 8.0M, max 153.6M, 145.6M free.
Jan 10 11:23:51 np0005580781 systemd: Started Journal Service.
Jan 10 11:23:51 np0005580781 systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 10 11:23:51 np0005580781 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 10 11:23:51 np0005580781 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 10 11:23:51 np0005580781 systemd[1]: Finished Create System Users.
Jan 10 11:23:51 np0005580781 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 10 11:23:51 np0005580781 systemd[1]: Starting Create Volatile Files and Directories...
Jan 10 11:23:51 np0005580781 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 10 11:23:51 np0005580781 systemd[1]: Finished Setup Virtual Console.
Jan 10 11:23:51 np0005580781 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 10 11:23:51 np0005580781 systemd[1]: Starting dracut cmdline hook...
Jan 10 11:23:51 np0005580781 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 10 11:23:51 np0005580781 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 10 11:23:51 np0005580781 systemd[1]: Finished Create Volatile Files and Directories.
Jan 10 11:23:51 np0005580781 systemd[1]: Finished dracut cmdline hook.
Jan 10 11:23:51 np0005580781 systemd[1]: Starting dracut pre-udev hook...
Jan 10 11:23:51 np0005580781 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 10 11:23:51 np0005580781 kernel: device-mapper: uevent: version 1.0.3
Jan 10 11:23:51 np0005580781 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 10 11:23:51 np0005580781 kernel: RPC: Registered named UNIX socket transport module.
Jan 10 11:23:51 np0005580781 kernel: RPC: Registered udp transport module.
Jan 10 11:23:51 np0005580781 kernel: RPC: Registered tcp transport module.
Jan 10 11:23:51 np0005580781 kernel: RPC: Registered tcp-with-tls transport module.
Jan 10 11:23:51 np0005580781 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 10 11:23:51 np0005580781 rpc.statd[448]: Version 2.5.4 starting
Jan 10 11:23:51 np0005580781 rpc.statd[448]: Initializing NSM state
Jan 10 11:23:51 np0005580781 rpc.idmapd[453]: Setting log level to 0
Jan 10 11:23:51 np0005580781 systemd[1]: Finished dracut pre-udev hook.
Jan 10 11:23:51 np0005580781 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 10 11:23:51 np0005580781 systemd-udevd[466]: Using default interface naming scheme 'rhel-9.0'.
Jan 10 11:23:51 np0005580781 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 10 11:23:51 np0005580781 systemd[1]: Starting dracut pre-trigger hook...
Jan 10 11:23:51 np0005580781 systemd[1]: Finished dracut pre-trigger hook.
Jan 10 11:23:51 np0005580781 systemd[1]: Starting Coldplug All udev Devices...
Jan 10 11:23:52 np0005580781 systemd[1]: Created slice Slice /system/modprobe.
Jan 10 11:23:52 np0005580781 systemd[1]: Starting Load Kernel Module configfs...
Jan 10 11:23:52 np0005580781 systemd[1]: Finished Coldplug All udev Devices.
Jan 10 11:23:52 np0005580781 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 10 11:23:52 np0005580781 systemd[1]: Finished Load Kernel Module configfs.
Jan 10 11:23:52 np0005580781 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 10 11:23:52 np0005580781 systemd[1]: Reached target Network.
Jan 10 11:23:52 np0005580781 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 10 11:23:52 np0005580781 systemd[1]: Starting dracut initqueue hook...
Jan 10 11:23:52 np0005580781 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 10 11:23:52 np0005580781 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 10 11:23:52 np0005580781 kernel: vda: vda1
Jan 10 11:23:52 np0005580781 kernel: scsi host0: ata_piix
Jan 10 11:23:52 np0005580781 kernel: scsi host1: ata_piix
Jan 10 11:23:52 np0005580781 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 10 11:23:52 np0005580781 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 10 11:23:52 np0005580781 systemd[1]: Found device /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc.
Jan 10 11:23:52 np0005580781 systemd[1]: Reached target Initrd Root Device.
Jan 10 11:23:52 np0005580781 systemd[1]: Mounting Kernel Configuration File System...
Jan 10 11:23:52 np0005580781 systemd[1]: Mounted Kernel Configuration File System.
Jan 10 11:23:52 np0005580781 systemd[1]: Reached target System Initialization.
Jan 10 11:23:52 np0005580781 systemd[1]: Reached target Basic System.
Jan 10 11:23:52 np0005580781 kernel: ata1: found unknown device (class 0)
Jan 10 11:23:52 np0005580781 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 10 11:23:52 np0005580781 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 10 11:23:52 np0005580781 systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 11:23:52 np0005580781 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 10 11:23:52 np0005580781 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 10 11:23:52 np0005580781 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 10 11:23:52 np0005580781 systemd[1]: Finished dracut initqueue hook.
Jan 10 11:23:52 np0005580781 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 10 11:23:52 np0005580781 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 10 11:23:52 np0005580781 systemd[1]: Reached target Remote File Systems.
Jan 10 11:23:52 np0005580781 systemd[1]: Starting dracut pre-mount hook...
Jan 10 11:23:52 np0005580781 systemd[1]: Finished dracut pre-mount hook.
Jan 10 11:23:52 np0005580781 systemd[1]: Starting File System Check on /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc...
Jan 10 11:23:52 np0005580781 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Jan 10 11:23:52 np0005580781 systemd[1]: Finished File System Check on /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc.
Jan 10 11:23:52 np0005580781 systemd[1]: Mounting /sysroot...
Jan 10 11:23:53 np0005580781 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 10 11:23:53 np0005580781 kernel: XFS (vda1): Mounting V5 Filesystem f2a0a5c1-133f-4977-b837-e40b31cbd9cc
Jan 10 11:23:53 np0005580781 kernel: XFS (vda1): Ending clean mount
Jan 10 11:23:53 np0005580781 systemd[1]: Mounted /sysroot.
Jan 10 11:23:53 np0005580781 systemd[1]: Reached target Initrd Root File System.
Jan 10 11:23:53 np0005580781 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 10 11:23:53 np0005580781 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 10 11:23:53 np0005580781 systemd[1]: Reached target Initrd File Systems.
Jan 10 11:23:53 np0005580781 systemd[1]: Reached target Initrd Default Target.
Jan 10 11:23:53 np0005580781 systemd[1]: Starting dracut mount hook...
Jan 10 11:23:53 np0005580781 systemd[1]: Finished dracut mount hook.
Jan 10 11:23:53 np0005580781 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 10 11:23:53 np0005580781 rpc.idmapd[453]: exiting on signal 15
Jan 10 11:23:53 np0005580781 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 10 11:23:53 np0005580781 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Network.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Timer Units.
Jan 10 11:23:53 np0005580781 systemd[1]: dbus.socket: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 10 11:23:53 np0005580781 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Initrd Default Target.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Basic System.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Initrd Root Device.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Initrd /usr File System.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Path Units.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Remote File Systems.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Slice Units.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Socket Units.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target System Initialization.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Local File Systems.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Swaps.
Jan 10 11:23:53 np0005580781 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped dracut mount hook.
Jan 10 11:23:53 np0005580781 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped dracut pre-mount hook.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 10 11:23:53 np0005580781 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped dracut initqueue hook.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Apply Kernel Variables.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Coldplug All udev Devices.
Jan 10 11:23:53 np0005580781 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped dracut pre-trigger hook.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Setup Virtual Console.
Jan 10 11:23:53 np0005580781 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Closed udev Control Socket.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Closed udev Kernel Socket.
Jan 10 11:23:53 np0005580781 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped dracut pre-udev hook.
Jan 10 11:23:53 np0005580781 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped dracut cmdline hook.
Jan 10 11:23:53 np0005580781 systemd[1]: Starting Cleanup udev Database...
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 10 11:23:53 np0005580781 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 10 11:23:53 np0005580781 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Stopped Create System Users.
Jan 10 11:23:53 np0005580781 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 10 11:23:53 np0005580781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 10 11:23:53 np0005580781 systemd[1]: Finished Cleanup udev Database.
Jan 10 11:23:53 np0005580781 systemd[1]: Reached target Switch Root.
Jan 10 11:23:53 np0005580781 systemd[1]: Starting Switch Root...
Jan 10 11:23:53 np0005580781 systemd[1]: Switching root.
Jan 10 11:23:53 np0005580781 systemd-journald[309]: Journal stopped
Jan 10 11:23:54 np0005580781 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 10 11:23:54 np0005580781 kernel: audit: type=1404 audit(1768062233.571:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 10 11:23:54 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 11:23:54 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 11:23:54 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 11:23:54 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 11:23:54 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 11:23:54 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 11:23:54 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 11:23:54 np0005580781 kernel: audit: type=1403 audit(1768062233.700:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 10 11:23:54 np0005580781 systemd: Successfully loaded SELinux policy in 132.940ms.
Jan 10 11:23:54 np0005580781 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.798ms.
Jan 10 11:23:54 np0005580781 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 10 11:23:54 np0005580781 systemd: Detected virtualization kvm.
Jan 10 11:23:54 np0005580781 systemd: Detected architecture x86-64.
Jan 10 11:23:54 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:23:54 np0005580781 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 10 11:23:54 np0005580781 systemd: Stopped Switch Root.
Jan 10 11:23:54 np0005580781 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 10 11:23:54 np0005580781 systemd: Created slice Slice /system/getty.
Jan 10 11:23:54 np0005580781 systemd: Created slice Slice /system/serial-getty.
Jan 10 11:23:54 np0005580781 systemd: Created slice Slice /system/sshd-keygen.
Jan 10 11:23:54 np0005580781 systemd: Created slice User and Session Slice.
Jan 10 11:23:54 np0005580781 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 10 11:23:54 np0005580781 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 10 11:23:54 np0005580781 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 10 11:23:54 np0005580781 systemd: Reached target Local Encrypted Volumes.
Jan 10 11:23:54 np0005580781 systemd: Stopped target Switch Root.
Jan 10 11:23:54 np0005580781 systemd: Stopped target Initrd File Systems.
Jan 10 11:23:54 np0005580781 systemd: Stopped target Initrd Root File System.
Jan 10 11:23:54 np0005580781 systemd: Reached target Local Integrity Protected Volumes.
Jan 10 11:23:54 np0005580781 systemd: Reached target Path Units.
Jan 10 11:23:54 np0005580781 systemd: Reached target rpc_pipefs.target.
Jan 10 11:23:54 np0005580781 systemd: Reached target Slice Units.
Jan 10 11:23:54 np0005580781 systemd: Reached target Swaps.
Jan 10 11:23:54 np0005580781 systemd: Reached target Local Verity Protected Volumes.
Jan 10 11:23:54 np0005580781 systemd: Listening on RPCbind Server Activation Socket.
Jan 10 11:23:54 np0005580781 systemd: Reached target RPC Port Mapper.
Jan 10 11:23:54 np0005580781 systemd: Listening on Process Core Dump Socket.
Jan 10 11:23:54 np0005580781 systemd: Listening on initctl Compatibility Named Pipe.
Jan 10 11:23:54 np0005580781 systemd: Listening on udev Control Socket.
Jan 10 11:23:54 np0005580781 systemd: Listening on udev Kernel Socket.
Jan 10 11:23:54 np0005580781 systemd: Mounting Huge Pages File System...
Jan 10 11:23:54 np0005580781 systemd: Mounting POSIX Message Queue File System...
Jan 10 11:23:54 np0005580781 systemd: Mounting Kernel Debug File System...
Jan 10 11:23:54 np0005580781 systemd: Mounting Kernel Trace File System...
Jan 10 11:23:54 np0005580781 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 10 11:23:54 np0005580781 systemd: Starting Create List of Static Device Nodes...
Jan 10 11:23:54 np0005580781 systemd: Starting Load Kernel Module configfs...
Jan 10 11:23:54 np0005580781 systemd: Starting Load Kernel Module drm...
Jan 10 11:23:54 np0005580781 systemd: Starting Load Kernel Module efi_pstore...
Jan 10 11:23:54 np0005580781 systemd: Starting Load Kernel Module fuse...
Jan 10 11:23:54 np0005580781 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 10 11:23:54 np0005580781 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 10 11:23:54 np0005580781 systemd: Stopped File System Check on Root Device.
Jan 10 11:23:54 np0005580781 systemd: Stopped Journal Service.
Jan 10 11:23:54 np0005580781 systemd: Starting Journal Service...
Jan 10 11:23:54 np0005580781 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 10 11:23:54 np0005580781 systemd: Starting Generate network units from Kernel command line...
Jan 10 11:23:54 np0005580781 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 10 11:23:54 np0005580781 systemd: Starting Remount Root and Kernel File Systems...
Jan 10 11:23:54 np0005580781 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 10 11:23:54 np0005580781 systemd: Starting Apply Kernel Variables...
Jan 10 11:23:54 np0005580781 systemd-journald[679]: Journal started
Jan 10 11:23:54 np0005580781 systemd-journald[679]: Runtime Journal (/run/log/journal/bfa963f84c4f244b9e78b91a43b5e88e) is 8.0M, max 153.6M, 145.6M free.
Jan 10 11:23:54 np0005580781 systemd[1]: Queued start job for default target Multi-User System.
Jan 10 11:23:54 np0005580781 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 10 11:23:54 np0005580781 systemd: Starting Coldplug All udev Devices...
Jan 10 11:23:54 np0005580781 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 10 11:23:54 np0005580781 kernel: fuse: init (API version 7.37)
Jan 10 11:23:54 np0005580781 systemd: Started Journal Service.
Jan 10 11:23:54 np0005580781 systemd[1]: Mounted Huge Pages File System.
Jan 10 11:23:54 np0005580781 systemd[1]: Mounted POSIX Message Queue File System.
Jan 10 11:23:54 np0005580781 systemd[1]: Mounted Kernel Debug File System.
Jan 10 11:23:54 np0005580781 systemd[1]: Mounted Kernel Trace File System.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Create List of Static Device Nodes.
Jan 10 11:23:54 np0005580781 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Load Kernel Module configfs.
Jan 10 11:23:54 np0005580781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 10 11:23:54 np0005580781 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Load Kernel Module fuse.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Generate network units from Kernel command line.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Apply Kernel Variables.
Jan 10 11:23:54 np0005580781 kernel: ACPI: bus type drm_connector registered
Jan 10 11:23:54 np0005580781 systemd[1]: Mounting FUSE Control File System...
Jan 10 11:23:54 np0005580781 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Rebuild Hardware Database...
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 10 11:23:54 np0005580781 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Load/Save OS Random Seed...
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Create System Users...
Jan 10 11:23:54 np0005580781 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Load Kernel Module drm.
Jan 10 11:23:54 np0005580781 systemd[1]: Mounted FUSE Control File System.
Jan 10 11:23:54 np0005580781 systemd-journald[679]: Runtime Journal (/run/log/journal/bfa963f84c4f244b9e78b91a43b5e88e) is 8.0M, max 153.6M, 145.6M free.
Jan 10 11:23:54 np0005580781 systemd-journald[679]: Received client request to flush runtime journal.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Load/Save OS Random Seed.
Jan 10 11:23:54 np0005580781 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Create System Users.
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Coldplug All udev Devices.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 10 11:23:54 np0005580781 systemd[1]: Reached target Preparation for Local File Systems.
Jan 10 11:23:54 np0005580781 systemd[1]: Reached target Local File Systems.
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 10 11:23:54 np0005580781 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 10 11:23:54 np0005580781 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 10 11:23:54 np0005580781 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Automatic Boot Loader Update...
Jan 10 11:23:54 np0005580781 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Create Volatile Files and Directories...
Jan 10 11:23:54 np0005580781 bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Automatic Boot Loader Update.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Create Volatile Files and Directories.
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Security Auditing Service...
Jan 10 11:23:54 np0005580781 systemd[1]: Starting RPC Bind...
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Rebuild Journal Catalog...
Jan 10 11:23:54 np0005580781 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 10 11:23:54 np0005580781 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Rebuild Journal Catalog.
Jan 10 11:23:54 np0005580781 augenrules[708]: /sbin/augenrules: No change
Jan 10 11:23:54 np0005580781 systemd[1]: Started RPC Bind.
Jan 10 11:23:54 np0005580781 augenrules[723]: No rules
Jan 10 11:23:54 np0005580781 augenrules[723]: enabled 1
Jan 10 11:23:54 np0005580781 augenrules[723]: failure 1
Jan 10 11:23:54 np0005580781 augenrules[723]: pid 702
Jan 10 11:23:54 np0005580781 augenrules[723]: rate_limit 0
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_limit 8192
Jan 10 11:23:54 np0005580781 augenrules[723]: lost 0
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog 4
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_wait_time 60000
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_wait_time_actual 0
Jan 10 11:23:54 np0005580781 augenrules[723]: enabled 1
Jan 10 11:23:54 np0005580781 augenrules[723]: failure 1
Jan 10 11:23:54 np0005580781 augenrules[723]: pid 702
Jan 10 11:23:54 np0005580781 augenrules[723]: rate_limit 0
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_limit 8192
Jan 10 11:23:54 np0005580781 augenrules[723]: lost 0
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog 4
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_wait_time 60000
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_wait_time_actual 0
Jan 10 11:23:54 np0005580781 augenrules[723]: enabled 1
Jan 10 11:23:54 np0005580781 augenrules[723]: failure 1
Jan 10 11:23:54 np0005580781 augenrules[723]: pid 702
Jan 10 11:23:54 np0005580781 augenrules[723]: rate_limit 0
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_limit 8192
Jan 10 11:23:54 np0005580781 augenrules[723]: lost 0
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog 4
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_wait_time 60000
Jan 10 11:23:54 np0005580781 augenrules[723]: backlog_wait_time_actual 0
Jan 10 11:23:54 np0005580781 systemd[1]: Started Security Auditing Service.
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Rebuild Hardware Database.
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 10 11:23:54 np0005580781 systemd[1]: Starting Update is Completed...
Jan 10 11:23:54 np0005580781 systemd[1]: Finished Update is Completed.
Jan 10 11:23:55 np0005580781 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Jan 10 11:23:55 np0005580781 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 10 11:23:55 np0005580781 systemd[1]: Reached target System Initialization.
Jan 10 11:23:55 np0005580781 systemd[1]: Started dnf makecache --timer.
Jan 10 11:23:55 np0005580781 systemd[1]: Started Daily rotation of log files.
Jan 10 11:23:55 np0005580781 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 10 11:23:55 np0005580781 systemd[1]: Reached target Timer Units.
Jan 10 11:23:55 np0005580781 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 10 11:23:55 np0005580781 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 10 11:23:55 np0005580781 systemd[1]: Reached target Socket Units.
Jan 10 11:23:55 np0005580781 systemd[1]: Starting D-Bus System Message Bus...
Jan 10 11:23:55 np0005580781 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 10 11:23:55 np0005580781 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 10 11:23:55 np0005580781 systemd[1]: Starting Load Kernel Module configfs...
Jan 10 11:23:55 np0005580781 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 10 11:23:55 np0005580781 systemd[1]: Finished Load Kernel Module configfs.
Jan 10 11:23:55 np0005580781 systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 11:23:55 np0005580781 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 10 11:23:55 np0005580781 systemd[1]: Started D-Bus System Message Bus.
Jan 10 11:23:55 np0005580781 systemd[1]: Reached target Basic System.
Jan 10 11:23:55 np0005580781 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 10 11:23:55 np0005580781 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 10 11:23:55 np0005580781 dbus-broker-lau[744]: Ready
Jan 10 11:23:55 np0005580781 systemd[1]: Starting NTP client/server...
Jan 10 11:23:55 np0005580781 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 10 11:23:55 np0005580781 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 10 11:23:55 np0005580781 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 10 11:23:55 np0005580781 chronyd[785]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 10 11:23:55 np0005580781 chronyd[785]: Loaded 0 symmetric keys
Jan 10 11:23:55 np0005580781 chronyd[785]: Using right/UTC timezone to obtain leap second data
Jan 10 11:23:55 np0005580781 chronyd[785]: Loaded seccomp filter (level 2)
Jan 10 11:23:55 np0005580781 systemd[1]: Starting IPv4 firewall with iptables...
Jan 10 11:23:55 np0005580781 systemd[1]: Started irqbalance daemon.
Jan 10 11:23:55 np0005580781 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 10 11:23:55 np0005580781 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 11:23:55 np0005580781 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 11:23:55 np0005580781 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 11:23:55 np0005580781 systemd[1]: Reached target sshd-keygen.target.
Jan 10 11:23:55 np0005580781 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 10 11:23:55 np0005580781 systemd[1]: Reached target User and Group Name Lookups.
Jan 10 11:23:55 np0005580781 systemd[1]: Starting User Login Management...
Jan 10 11:23:55 np0005580781 systemd[1]: Started NTP client/server.
Jan 10 11:23:55 np0005580781 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 10 11:23:55 np0005580781 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 10 11:23:55 np0005580781 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 10 11:23:55 np0005580781 kernel: kvm_amd: TSC scaling supported
Jan 10 11:23:55 np0005580781 kernel: kvm_amd: Nested Virtualization enabled
Jan 10 11:23:55 np0005580781 kernel: kvm_amd: Nested Paging enabled
Jan 10 11:23:55 np0005580781 kernel: kvm_amd: LBR virtualization supported
Jan 10 11:23:55 np0005580781 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 10 11:23:55 np0005580781 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 10 11:23:55 np0005580781 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 10 11:23:55 np0005580781 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 10 11:23:55 np0005580781 kernel: Console: switching to colour dummy device 80x25
Jan 10 11:23:55 np0005580781 systemd-logind[798]: New seat seat0.
Jan 10 11:23:55 np0005580781 systemd[1]: Started User Login Management.
Jan 10 11:23:55 np0005580781 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 10 11:23:55 np0005580781 kernel: [drm] features: -context_init
Jan 10 11:23:55 np0005580781 kernel: [drm] number of scanouts: 1
Jan 10 11:23:55 np0005580781 kernel: [drm] number of cap sets: 0
Jan 10 11:23:55 np0005580781 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 10 11:23:55 np0005580781 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 10 11:23:55 np0005580781 kernel: Console: switching to colour frame buffer device 128x48
Jan 10 11:23:55 np0005580781 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 10 11:23:55 np0005580781 iptables.init[789]: iptables: Applying firewall rules: [  OK  ]
Jan 10 11:23:55 np0005580781 systemd[1]: Finished IPv4 firewall with iptables.
Jan 10 11:23:55 np0005580781 cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 10 Jan 2026 16:23:55 +0000. Up 6.36 seconds.
Jan 10 11:23:55 np0005580781 systemd[1]: run-cloud\x2dinit-tmp-tmpnpub8fvx.mount: Deactivated successfully.
Jan 10 11:23:55 np0005580781 systemd[1]: Starting Hostname Service...
Jan 10 11:23:56 np0005580781 systemd[1]: Started Hostname Service.
Jan 10 11:23:56 np0005580781 systemd-hostnamed[855]: Hostname set to <np0005580781.novalocal> (static)
Jan 10 11:23:56 np0005580781 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 10 11:23:56 np0005580781 systemd[1]: Reached target Preparation for Network.
Jan 10 11:23:56 np0005580781 systemd[1]: Starting Network Manager...
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2253] NetworkManager (version 1.54.2-1.el9) is starting... (boot:bad47697-514b-4229-8b29-23921a9a6958)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2260] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2334] manager[0x56209a672000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2388] hostname: hostname: using hostnamed
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2388] hostname: static hostname changed from (none) to "np0005580781.novalocal"
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2393] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2514] manager[0x56209a672000]: rfkill: Wi-Fi hardware radio set enabled
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2515] manager[0x56209a672000]: rfkill: WWAN hardware radio set enabled
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2553] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2554] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2555] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2555] manager: Networking is enabled by state file
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2556] settings: Loaded settings plugin: keyfile (internal)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2565] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2582] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2595] dhcp: init: Using DHCP client 'internal'
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2597] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2609] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 10 11:23:56 np0005580781 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2615] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2625] device (lo): Activation: starting connection 'lo' (d627873a-279e-4130-ac7c-6a2872dc6445)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2633] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2636] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2660] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2665] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2668] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2670] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2673] device (eth0): carrier: link connected
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2677] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2684] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2691] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2695] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2695] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2698] manager: NetworkManager state is now CONNECTING
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2699] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2710] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2713] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2749] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2755] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.2773] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:23:56 np0005580781 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 11:23:56 np0005580781 systemd[1]: Started Network Manager.
Jan 10 11:23:56 np0005580781 systemd[1]: Reached target Network.
Jan 10 11:23:56 np0005580781 systemd[1]: Starting Network Manager Wait Online...
Jan 10 11:23:56 np0005580781 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 10 11:23:56 np0005580781 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 10 11:23:56 np0005580781 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 10 11:23:56 np0005580781 systemd[1]: Reached target NFS client services.
Jan 10 11:23:56 np0005580781 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3197] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3200] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3204] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 10 11:23:56 np0005580781 systemd[1]: Reached target Remote File Systems.
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3218] device (lo): Activation: successful, device activated.
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3227] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3233] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 11:23:56 np0005580781 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3241] device (eth0): Activation: successful, device activated.
Jan 10 11:23:56 np0005580781 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3258] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 10 11:23:56 np0005580781 NetworkManager[859]: <info>  [1768062236.3271] manager: startup complete
Jan 10 11:23:56 np0005580781 systemd[1]: Finished Network Manager Wait Online.
Jan 10 11:23:56 np0005580781 systemd[1]: Starting Cloud-init: Network Stage...
Jan 10 11:23:56 np0005580781 cloud-init[922]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 10 Jan 2026 16:23:56 +0000. Up 7.36 seconds.
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.74        | 255.255.255.0 | global | fa:16:3e:49:0e:aa |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe49:eaa/64 |       .       |  link  | fa:16:3e:49:0e:aa |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 10 11:23:56 np0005580781 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 10 11:23:58 np0005580781 cloud-init[922]: Generating public/private rsa key pair.
Jan 10 11:23:58 np0005580781 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 10 11:23:58 np0005580781 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 10 11:23:58 np0005580781 cloud-init[922]: The key fingerprint is:
Jan 10 11:23:58 np0005580781 cloud-init[922]: SHA256:NwbL++5yrOKwEryJ6OBZfWK1r7uXkUNzc9raTCTtoYo root@np0005580781.novalocal
Jan 10 11:23:58 np0005580781 cloud-init[922]: The key's randomart image is:
Jan 10 11:23:58 np0005580781 cloud-init[922]: +---[RSA 3072]----+
Jan 10 11:23:58 np0005580781 cloud-init[922]: |                 |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |                 |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |        .   .    |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |       . = + =   |
Jan 10 11:23:58 np0005580781 cloud-init[922]: | .     .S B X .  |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |  o . . .B + +   |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |o. =.+ oo.= =    |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |= * .o+Eo=o. o   |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |.+ ....=*B+      |
Jan 10 11:23:58 np0005580781 cloud-init[922]: +----[SHA256]-----+
Jan 10 11:23:58 np0005580781 cloud-init[922]: Generating public/private ecdsa key pair.
Jan 10 11:23:58 np0005580781 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 10 11:23:58 np0005580781 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 10 11:23:58 np0005580781 cloud-init[922]: The key fingerprint is:
Jan 10 11:23:58 np0005580781 cloud-init[922]: SHA256:Q5P44HImb40twqNrJHfa+flB24JjYHv9nVHYCLjTez0 root@np0005580781.novalocal
Jan 10 11:23:58 np0005580781 cloud-init[922]: The key's randomart image is:
Jan 10 11:23:58 np0005580781 cloud-init[922]: +---[ECDSA 256]---+
Jan 10 11:23:58 np0005580781 cloud-init[922]: |                 |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |       . o       |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |      o = .      |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |     . + + . +   |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |    = + S . o o  |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |. oo.O B = . o   |
Jan 10 11:23:58 np0005580781 cloud-init[922]: | + +=.X B o o E  |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |  o.o* + + o o . |
Jan 10 11:23:58 np0005580781 cloud-init[922]: | .o. .o.. . o    |
Jan 10 11:23:58 np0005580781 cloud-init[922]: +----[SHA256]-----+
Jan 10 11:23:58 np0005580781 cloud-init[922]: Generating public/private ed25519 key pair.
Jan 10 11:23:58 np0005580781 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 10 11:23:58 np0005580781 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 10 11:23:58 np0005580781 cloud-init[922]: The key fingerprint is:
Jan 10 11:23:58 np0005580781 cloud-init[922]: SHA256:NBCxV9OdqOJixAGHwcnNHRsQ/U445B6s8QNi7vZfqXU root@np0005580781.novalocal
Jan 10 11:23:58 np0005580781 cloud-init[922]: The key's randomart image is:
Jan 10 11:23:58 np0005580781 cloud-init[922]: +--[ED25519 256]--+
Jan 10 11:23:58 np0005580781 cloud-init[922]: |   ooBBBo.o. o . |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |    =.+o++ .o o  |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |     ..==o .     |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |    o =oB.+      |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |   o o BS*       |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |    . + = ..     |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |   . . . .+ E    |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |    o    + .     |
Jan 10 11:23:58 np0005580781 cloud-init[922]: |   . ...o        |
Jan 10 11:23:58 np0005580781 cloud-init[922]: +----[SHA256]-----+
Jan 10 11:23:58 np0005580781 systemd[1]: Finished Cloud-init: Network Stage.
Jan 10 11:23:58 np0005580781 systemd[1]: Reached target Cloud-config availability.
Jan 10 11:23:58 np0005580781 systemd[1]: Reached target Network is Online.
Jan 10 11:23:58 np0005580781 systemd[1]: Starting Cloud-init: Config Stage...
Jan 10 11:23:58 np0005580781 systemd[1]: Starting Crash recovery kernel arming...
Jan 10 11:23:58 np0005580781 systemd[1]: Starting Notify NFS peers of a restart...
Jan 10 11:23:58 np0005580781 systemd[1]: Starting System Logging Service...
Jan 10 11:23:58 np0005580781 sm-notify[1005]: Version 2.5.4 starting
Jan 10 11:23:58 np0005580781 systemd[1]: Starting OpenSSH server daemon...
Jan 10 11:23:58 np0005580781 systemd[1]: Starting Permit User Sessions...
Jan 10 11:23:58 np0005580781 systemd[1]: Started Notify NFS peers of a restart.
Jan 10 11:23:58 np0005580781 systemd[1]: Finished Permit User Sessions.
Jan 10 11:23:58 np0005580781 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 10 11:23:58 np0005580781 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 10 11:23:58 np0005580781 systemd[1]: Started Command Scheduler.
Jan 10 11:23:58 np0005580781 systemd[1]: Started Getty on tty1.
Jan 10 11:23:58 np0005580781 systemd[1]: Started Serial Getty on ttyS0.
Jan 10 11:23:58 np0005580781 systemd[1]: Reached target Login Prompts.
Jan 10 11:23:58 np0005580781 systemd[1]: Started OpenSSH server daemon.
Jan 10 11:23:58 np0005580781 systemd[1]: Started System Logging Service.
Jan 10 11:23:58 np0005580781 systemd[1]: Reached target Multi-User System.
Jan 10 11:23:58 np0005580781 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 10 11:23:58 np0005580781 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 10 11:23:58 np0005580781 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 10 11:23:58 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 11:23:58 np0005580781 kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Jan 10 11:23:58 np0005580781 kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-655.el9.x86_64kdump.img
Jan 10 11:23:58 np0005580781 cloud-init[1103]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 10 Jan 2026 16:23:58 +0000. Up 9.16 seconds.
Jan 10 11:23:58 np0005580781 systemd[1]: Finished Cloud-init: Config Stage.
Jan 10 11:23:58 np0005580781 systemd[1]: Starting Cloud-init: Final Stage...
Jan 10 11:23:58 np0005580781 cloud-init[1266]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 10 Jan 2026 16:23:58 +0000. Up 9.58 seconds.
Jan 10 11:23:58 np0005580781 dracut[1270]: dracut-057-102.git20250818.el9
Jan 10 11:23:58 np0005580781 cloud-init[1287]: #############################################################
Jan 10 11:23:58 np0005580781 cloud-init[1288]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 10 11:23:58 np0005580781 cloud-init[1290]: 256 SHA256:Q5P44HImb40twqNrJHfa+flB24JjYHv9nVHYCLjTez0 root@np0005580781.novalocal (ECDSA)
Jan 10 11:23:58 np0005580781 cloud-init[1292]: 256 SHA256:NBCxV9OdqOJixAGHwcnNHRsQ/U445B6s8QNi7vZfqXU root@np0005580781.novalocal (ED25519)
Jan 10 11:23:58 np0005580781 cloud-init[1294]: 3072 SHA256:NwbL++5yrOKwEryJ6OBZfWK1r7uXkUNzc9raTCTtoYo root@np0005580781.novalocal (RSA)
Jan 10 11:23:58 np0005580781 cloud-init[1295]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 10 11:23:58 np0005580781 cloud-init[1296]: #############################################################
Jan 10 11:23:59 np0005580781 cloud-init[1266]: Cloud-init v. 24.4-8.el9 finished at Sat, 10 Jan 2026 16:23:59 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.75 seconds
Jan 10 11:23:59 np0005580781 systemd[1]: Finished Cloud-init: Final Stage.
Jan 10 11:23:59 np0005580781 systemd[1]: Reached target Cloud-init target.
Jan 10 11:23:59 np0005580781 dracut[1272]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-655.el9.x86_64kdump.img 5.14.0-655.el9.x86_64
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 10 11:23:59 np0005580781 dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: memstrack is not available
Jan 10 11:24:00 np0005580781 dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 10 11:24:00 np0005580781 dracut[1272]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 10 11:24:01 np0005580781 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 10 11:24:01 np0005580781 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 10 11:24:01 np0005580781 dracut[1272]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 10 11:24:01 np0005580781 dracut[1272]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 10 11:24:01 np0005580781 dracut[1272]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 10 11:24:01 np0005580781 dracut[1272]: memstrack is not available
Jan 10 11:24:01 np0005580781 dracut[1272]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 10 11:24:01 np0005580781 dracut[1272]: *** Including module: systemd ***
Jan 10 11:24:01 np0005580781 chronyd[785]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Jan 10 11:24:01 np0005580781 chronyd[785]: System clock TAI offset set to 37 seconds
Jan 10 11:24:01 np0005580781 dracut[1272]: *** Including module: fips ***
Jan 10 11:24:01 np0005580781 dracut[1272]: *** Including module: systemd-initrd ***
Jan 10 11:24:01 np0005580781 dracut[1272]: *** Including module: i18n ***
Jan 10 11:24:02 np0005580781 dracut[1272]: *** Including module: drm ***
Jan 10 11:24:02 np0005580781 dracut[1272]: *** Including module: prefixdevname ***
Jan 10 11:24:02 np0005580781 dracut[1272]: *** Including module: kernel-modules ***
Jan 10 11:24:02 np0005580781 kernel: block vda: the capability attribute has been deprecated.
Jan 10 11:24:03 np0005580781 chronyd[785]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Jan 10 11:24:03 np0005580781 dracut[1272]: *** Including module: kernel-modules-extra ***
Jan 10 11:24:03 np0005580781 dracut[1272]: *** Including module: qemu ***
Jan 10 11:24:03 np0005580781 dracut[1272]: *** Including module: fstab-sys ***
Jan 10 11:24:03 np0005580781 dracut[1272]: *** Including module: rootfs-block ***
Jan 10 11:24:03 np0005580781 dracut[1272]: *** Including module: terminfo ***
Jan 10 11:24:03 np0005580781 dracut[1272]: *** Including module: udev-rules ***
Jan 10 11:24:04 np0005580781 dracut[1272]: Skipping udev rule: 91-permissions.rules
Jan 10 11:24:04 np0005580781 dracut[1272]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 10 11:24:04 np0005580781 dracut[1272]: *** Including module: virtiofs ***
Jan 10 11:24:04 np0005580781 dracut[1272]: *** Including module: dracut-systemd ***
Jan 10 11:24:04 np0005580781 dracut[1272]: *** Including module: usrmount ***
Jan 10 11:24:04 np0005580781 dracut[1272]: *** Including module: base ***
Jan 10 11:24:04 np0005580781 dracut[1272]: *** Including module: fs-lib ***
Jan 10 11:24:04 np0005580781 dracut[1272]: *** Including module: kdumpbase ***
Jan 10 11:24:05 np0005580781 dracut[1272]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 10 11:24:05 np0005580781 dracut[1272]:  microcode_ctl module: mangling fw_dir
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 10 11:24:05 np0005580781 irqbalance[794]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 10 11:24:05 np0005580781 irqbalance[794]: IRQ 25 affinity is now unmanaged
Jan 10 11:24:05 np0005580781 irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 10 11:24:05 np0005580781 irqbalance[794]: IRQ 31 affinity is now unmanaged
Jan 10 11:24:05 np0005580781 irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 10 11:24:05 np0005580781 irqbalance[794]: IRQ 28 affinity is now unmanaged
Jan 10 11:24:05 np0005580781 irqbalance[794]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 10 11:24:05 np0005580781 irqbalance[794]: IRQ 32 affinity is now unmanaged
Jan 10 11:24:05 np0005580781 irqbalance[794]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 10 11:24:05 np0005580781 irqbalance[794]: IRQ 30 affinity is now unmanaged
Jan 10 11:24:05 np0005580781 irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 10 11:24:05 np0005580781 irqbalance[794]: IRQ 29 affinity is now unmanaged
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 10 11:24:05 np0005580781 dracut[1272]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 10 11:24:05 np0005580781 dracut[1272]: *** Including module: openssl ***
Jan 10 11:24:05 np0005580781 dracut[1272]: *** Including module: shutdown ***
Jan 10 11:24:05 np0005580781 dracut[1272]: *** Including module: squash ***
Jan 10 11:24:05 np0005580781 dracut[1272]: *** Including modules done ***
Jan 10 11:24:05 np0005580781 dracut[1272]: *** Installing kernel module dependencies ***
Jan 10 11:24:06 np0005580781 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 11:24:06 np0005580781 dracut[1272]: *** Installing kernel module dependencies done ***
Jan 10 11:24:06 np0005580781 dracut[1272]: *** Resolving executable dependencies ***
Jan 10 11:24:08 np0005580781 dracut[1272]: *** Resolving executable dependencies done ***
Jan 10 11:24:08 np0005580781 dracut[1272]: *** Generating early-microcode cpio image ***
Jan 10 11:24:08 np0005580781 dracut[1272]: *** Store current command line parameters ***
Jan 10 11:24:08 np0005580781 dracut[1272]: Stored kernel commandline:
Jan 10 11:24:08 np0005580781 dracut[1272]: No dracut internal kernel commandline stored in the initramfs
Jan 10 11:24:08 np0005580781 dracut[1272]: *** Install squash loader ***
Jan 10 11:24:09 np0005580781 dracut[1272]: *** Squashing the files inside the initramfs ***
Jan 10 11:24:10 np0005580781 dracut[1272]: *** Squashing the files inside the initramfs done ***
Jan 10 11:24:10 np0005580781 dracut[1272]: *** Creating image file '/boot/initramfs-5.14.0-655.el9.x86_64kdump.img' ***
Jan 10 11:24:10 np0005580781 dracut[1272]: *** Hardlinking files ***
Jan 10 11:24:10 np0005580781 dracut[1272]: *** Hardlinking files done ***
Jan 10 11:24:11 np0005580781 dracut[1272]: *** Creating initramfs image file '/boot/initramfs-5.14.0-655.el9.x86_64kdump.img' done ***
Jan 10 11:24:11 np0005580781 kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Jan 10 11:24:11 np0005580781 kdumpctl[1020]: kdump: Starting kdump: [OK]
Jan 10 11:24:11 np0005580781 systemd[1]: Finished Crash recovery kernel arming.
Jan 10 11:24:11 np0005580781 systemd[1]: Startup finished in 1.616s (kernel) + 2.673s (initrd) + 17.992s (userspace) = 22.282s.
Jan 10 11:24:15 np0005580781 systemd[1]: Created slice User Slice of UID 1000.
Jan 10 11:24:15 np0005580781 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 10 11:24:15 np0005580781 systemd-logind[798]: New session 1 of user zuul.
Jan 10 11:24:15 np0005580781 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 10 11:24:15 np0005580781 systemd[1]: Starting User Manager for UID 1000...
Jan 10 11:24:16 np0005580781 systemd[4300]: Queued start job for default target Main User Target.
Jan 10 11:24:16 np0005580781 systemd[4300]: Created slice User Application Slice.
Jan 10 11:24:16 np0005580781 systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 10 11:24:16 np0005580781 systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 11:24:16 np0005580781 systemd[4300]: Reached target Paths.
Jan 10 11:24:16 np0005580781 systemd[4300]: Reached target Timers.
Jan 10 11:24:16 np0005580781 systemd[4300]: Starting D-Bus User Message Bus Socket...
Jan 10 11:24:16 np0005580781 systemd[4300]: Starting Create User's Volatile Files and Directories...
Jan 10 11:24:16 np0005580781 systemd[4300]: Listening on D-Bus User Message Bus Socket.
Jan 10 11:24:16 np0005580781 systemd[4300]: Reached target Sockets.
Jan 10 11:24:16 np0005580781 systemd[4300]: Finished Create User's Volatile Files and Directories.
Jan 10 11:24:16 np0005580781 systemd[4300]: Reached target Basic System.
Jan 10 11:24:16 np0005580781 systemd[4300]: Reached target Main User Target.
Jan 10 11:24:16 np0005580781 systemd[4300]: Startup finished in 180ms.
Jan 10 11:24:16 np0005580781 systemd[1]: Started User Manager for UID 1000.
Jan 10 11:24:16 np0005580781 systemd[1]: Started Session 1 of User zuul.
Jan 10 11:24:16 np0005580781 python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:24:19 np0005580781 python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:24:25 np0005580781 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:24:26 np0005580781 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 11:24:26 np0005580781 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 10 11:24:28 np0005580781 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeD5tUo51Lv8h9yEywo+EOfcOe8O9qcfGbBz06qpMofMkfxvR9FJX7HaldGOhwPYwnN8IcRyTV0h847QPaPD8sCQvCeERQFB0o7dWNv+B+pWlIgEkBfmCi8JouOBTrd0NGVq3z7xoWFJCIhDxepjZel40n5uFRbXifZMxjGZrBxLACjQHb8AMbrKf8TYZcYndKFcrlL13N1yC56oCEom41G55ck/7+EGgn0l5uwcGMq1fd8RaeO0ZQltzgUcuE/zaPMv0q2Ei6Ckc2bxrS6VXqXtlQFBfapEZxx0e1ihCKZbdcILoqJKFsm5ufcIXfG6MHTWxmvAx/4z5vq71RgaMB05qVzt519yWHI5FrhDr7CeTtAnPuaLUdyzMYuCcmle5UE3HfdflGVSXEuMjOCQUqF76hnlsJcZW54AtE2ia6dDZ42zqD/T5034uJu3DuFHblXGZt3nABoRwiikk+BWjMR2kKY7OR5kFqysxprgOGlHXMBEIBdkN6WZmUXHMLHjM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:28 np0005580781 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:29 np0005580781 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:24:29 np0005580781 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768062268.8812065-207-207263926591037/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=06074398b50949c395f57300e3d7e828_id_rsa follow=False checksum=133840384c351d4ac55c4317617f39d325dcfaaf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:30 np0005580781 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:24:30 np0005580781 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768062269.8792195-240-170148823908400/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=06074398b50949c395f57300e3d7e828_id_rsa.pub follow=False checksum=218145de1e2a4d006d31f6e8dfc84696a708209c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:32 np0005580781 python3[4972]: ansible-ping Invoked with data=pong
Jan 10 11:24:33 np0005580781 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:24:35 np0005580781 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 10 11:24:36 np0005580781 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:36 np0005580781 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:36 np0005580781 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:37 np0005580781 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:37 np0005580781 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:37 np0005580781 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:39 np0005580781 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:40 np0005580781 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:24:40 np0005580781 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062279.694272-21-144281286314797/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:41 np0005580781 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:41 np0005580781 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:41 np0005580781 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:42 np0005580781 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:42 np0005580781 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:42 np0005580781 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:43 np0005580781 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:43 np0005580781 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:43 np0005580781 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:44 np0005580781 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:44 np0005580781 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:44 np0005580781 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:44 np0005580781 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:45 np0005580781 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:45 np0005580781 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:45 np0005580781 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:46 np0005580781 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:46 np0005580781 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:46 np0005580781 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:47 np0005580781 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:47 np0005580781 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:47 np0005580781 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:47 np0005580781 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:48 np0005580781 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:48 np0005580781 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:48 np0005580781 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:24:51 np0005580781 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 10 11:24:51 np0005580781 systemd[1]: Starting Time & Date Service...
Jan 10 11:24:51 np0005580781 systemd[1]: Started Time & Date Service.
Jan 10 11:24:51 np0005580781 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Jan 10 11:24:51 np0005580781 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:52 np0005580781 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:24:52 np0005580781 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1768062291.7915237-153-210111170487502/source _original_basename=tmp57z7jz8w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:53 np0005580781 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:24:53 np0005580781 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768062292.711432-183-93559023362781/source _original_basename=tmpo3thpkxa follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:54 np0005580781 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:24:54 np0005580781 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768062293.7859862-231-262066641921198/source _original_basename=tmpe4j7s4jn follow=False checksum=6c462e10cf6b935fb22f4386c31d576dcf4d4133 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:55 np0005580781 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:24:55 np0005580781 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:24:55 np0005580781 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:24:56 np0005580781 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062295.5583942-273-160780419477474/source _original_basename=tmpeixxwqmy follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:24:56 np0005580781 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-34c0-10b8-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:24:57 np0005580781 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-34c0-10b8-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 10 11:24:58 np0005580781 python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:25:14 np0005580781 python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:25:21 np0005580781 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 10 11:25:50 np0005580781 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 10 11:25:50 np0005580781 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2539] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 10 11:25:50 np0005580781 systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2714] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2743] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2747] device (eth1): carrier: link connected
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2750] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2757] policy: auto-activating connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874)
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2761] device (eth1): Activation: starting connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874)
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2762] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2765] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2770] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:25:50 np0005580781 NetworkManager[859]: <info>  [1768062350.2775] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:25:51 np0005580781 python3[6971]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-2dfb-ba4b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:26:01 np0005580781 python3[7051]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:26:01 np0005580781 python3[7124]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768062361.144869-102-256905809893115/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=72c6eb85ec6de524f8f776b873377fe42c6f485e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:26:02 np0005580781 python3[7174]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:26:02 np0005580781 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 10 11:26:02 np0005580781 systemd[1]: Stopped Network Manager Wait Online.
Jan 10 11:26:02 np0005580781 systemd[1]: Stopping Network Manager Wait Online...
Jan 10 11:26:02 np0005580781 systemd[1]: Stopping Network Manager...
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.7866] caught SIGTERM, shutting down normally.
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.7876] dhcp4 (eth0): canceled DHCP transaction
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.7877] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.7877] dhcp4 (eth0): state changed no lease
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.7880] manager: NetworkManager state is now CONNECTING
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.8037] dhcp4 (eth1): canceled DHCP transaction
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.8038] dhcp4 (eth1): state changed no lease
Jan 10 11:26:02 np0005580781 NetworkManager[859]: <info>  [1768062362.8107] exiting (success)
Jan 10 11:26:02 np0005580781 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 11:26:02 np0005580781 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 10 11:26:02 np0005580781 systemd[1]: Stopped Network Manager.
Jan 10 11:26:02 np0005580781 systemd[1]: NetworkManager.service: Consumed 1.133s CPU time, 10.0M memory peak.
Jan 10 11:26:02 np0005580781 systemd[1]: Starting Network Manager...
Jan 10 11:26:02 np0005580781 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 11:26:02 np0005580781 NetworkManager[7178]: <info>  [1768062362.8805] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:bad47697-514b-4229-8b29-23921a9a6958)
Jan 10 11:26:02 np0005580781 NetworkManager[7178]: <info>  [1768062362.8808] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 10 11:26:02 np0005580781 NetworkManager[7178]: <info>  [1768062362.8899] manager[0x55ea1b6e0000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 10 11:26:02 np0005580781 systemd[1]: Starting Hostname Service...
Jan 10 11:26:02 np0005580781 systemd[1]: Started Hostname Service.
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0020] hostname: hostname: using hostnamed
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0022] hostname: static hostname changed from (none) to "np0005580781.novalocal"
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0029] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0034] manager[0x55ea1b6e0000]: rfkill: Wi-Fi hardware radio set enabled
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0034] manager[0x55ea1b6e0000]: rfkill: WWAN hardware radio set enabled
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0075] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0076] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0078] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0078] manager: Networking is enabled by state file
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0082] settings: Loaded settings plugin: keyfile (internal)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0088] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0125] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0138] dhcp: init: Using DHCP client 'internal'
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0142] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0149] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0156] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0167] device (lo): Activation: starting connection 'lo' (d627873a-279e-4130-ac7c-6a2872dc6445)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0176] device (eth0): carrier: link connected
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0183] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0190] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0191] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0200] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0210] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0219] device (eth1): carrier: link connected
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0225] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0233] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874) (indicated)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0234] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0241] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0251] device (eth1): Activation: starting connection 'Wired connection 1' (3d2c32e1-e902-3a7a-bfe1-2a4ee0361874)
Jan 10 11:26:03 np0005580781 systemd[1]: Started Network Manager.
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0261] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0267] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0271] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0274] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0277] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0282] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0285] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0290] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0294] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0304] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0309] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0320] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0325] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0344] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0351] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0360] device (lo): Activation: successful, device activated.
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0370] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0380] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 10 11:26:03 np0005580781 systemd[1]: Starting Network Manager Wait Online...
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0459] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0500] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0502] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0507] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0510] device (eth0): Activation: successful, device activated.
Jan 10 11:26:03 np0005580781 NetworkManager[7178]: <info>  [1768062363.0519] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 10 11:26:03 np0005580781 python3[7258]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-2dfb-ba4b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:26:13 np0005580781 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 11:26:33 np0005580781 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.2801] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 11:26:48 np0005580781 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 11:26:48 np0005580781 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3280] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3284] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3295] device (eth1): Activation: successful, device activated.
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3306] manager: startup complete
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3309] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <warn>  [1768062408.3319] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3331] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 10 11:26:48 np0005580781 systemd[1]: Finished Network Manager Wait Online.
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3470] dhcp4 (eth1): canceled DHCP transaction
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3470] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3471] dhcp4 (eth1): state changed no lease
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3493] policy: auto-activating connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3500] device (eth1): Activation: starting connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3501] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3504] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3516] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3530] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3584] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3588] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:26:48 np0005580781 NetworkManager[7178]: <info>  [1768062408.3600] device (eth1): Activation: successful, device activated.
Jan 10 11:26:58 np0005580781 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 11:27:03 np0005580781 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Jan 10 11:27:03 np0005580781 systemd-logind[798]: New session 3 of user zuul.
Jan 10 11:27:03 np0005580781 systemd[1]: Started Session 3 of User zuul.
Jan 10 11:27:03 np0005580781 python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:27:04 np0005580781 python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/ansible-tmp-1768062423.635777-267-210518818811984/source _original_basename=tmp_ghykej1 follow=False checksum=7fba06d2e41938c83d6477fc2dd3f650e30fc2d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:27:06 np0005580781 systemd[1]: session-3.scope: Deactivated successfully.
Jan 10 11:27:06 np0005580781 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Jan 10 11:27:06 np0005580781 systemd-logind[798]: Removed session 3.
Jan 10 11:27:15 np0005580781 systemd[4300]: Starting Mark boot as successful...
Jan 10 11:27:15 np0005580781 systemd[4300]: Finished Mark boot as successful.
Jan 10 11:29:15 np0005580781 systemd[4300]: Created slice User Background Tasks Slice.
Jan 10 11:29:16 np0005580781 systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Jan 10 11:29:16 np0005580781 systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Jan 10 11:31:28 np0005580781 systemd-logind[798]: New session 4 of user zuul.
Jan 10 11:31:28 np0005580781 systemd[1]: Started Session 4 of User zuul.
Jan 10 11:31:29 np0005580781 python3[7506]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-e6de-bd89-000000002159-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:31:29 np0005580781 python3[7534]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:31:29 np0005580781 python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:31:30 np0005580781 python3[7587]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:31:30 np0005580781 python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:31:30 np0005580781 python3[7639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:31:31 np0005580781 python3[7717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:31:31 np0005580781 python3[7790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062691.1167693-487-206991698135714/source _original_basename=tmpxeq2bqjc follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:31:33 np0005580781 python3[7840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 11:31:33 np0005580781 systemd[1]: Reloading.
Jan 10 11:31:33 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:31:34 np0005580781 python3[7897]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 10 11:31:35 np0005580781 irqbalance[794]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 10 11:31:35 np0005580781 irqbalance[794]: IRQ 27 affinity is now unmanaged
Jan 10 11:31:35 np0005580781 python3[7923]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:31:35 np0005580781 python3[7951]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:31:35 np0005580781 python3[7979]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:31:36 np0005580781 python3[8007]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:31:36 np0005580781 python3[8034]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-e6de-bd89-000000002160-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:31:37 np0005580781 python3[8064]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:31:39 np0005580781 systemd[1]: session-4.scope: Deactivated successfully.
Jan 10 11:31:39 np0005580781 systemd[1]: session-4.scope: Consumed 4.197s CPU time.
Jan 10 11:31:39 np0005580781 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Jan 10 11:31:39 np0005580781 systemd-logind[798]: Removed session 4.
Jan 10 11:31:40 np0005580781 systemd-logind[798]: New session 5 of user zuul.
Jan 10 11:31:40 np0005580781 systemd[1]: Started Session 5 of User zuul.
Jan 10 11:31:41 np0005580781 python3[8098]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 11:31:47 np0005580781 setsebool[8140]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 10 11:31:47 np0005580781 setsebool[8140]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 10 11:31:58 np0005580781 kernel: SELinux:  Converting 385 SID table entries...
Jan 10 11:31:58 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 11:31:58 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 11:31:58 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 11:31:58 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 11:31:58 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 11:31:58 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 11:31:58 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 11:32:07 np0005580781 kernel: SELinux:  Converting 388 SID table entries...
Jan 10 11:32:07 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 11:32:07 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 11:32:07 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 11:32:07 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 11:32:07 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 11:32:07 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 11:32:07 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 11:32:25 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 10 11:32:25 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 11:32:25 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 11:32:25 np0005580781 systemd[1]: Reloading.
Jan 10 11:32:25 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:32:26 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 11:32:41 np0005580781 python3[17172]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-0026-8d85-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:32:41 np0005580781 kernel: evm: overlay not supported
Jan 10 11:32:42 np0005580781 systemd[4300]: Starting D-Bus User Message Bus...
Jan 10 11:32:42 np0005580781 dbus-broker-launch[17590]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 10 11:32:42 np0005580781 dbus-broker-launch[17590]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 10 11:32:42 np0005580781 systemd[4300]: Started D-Bus User Message Bus.
Jan 10 11:32:42 np0005580781 dbus-broker-lau[17590]: Ready
Jan 10 11:32:42 np0005580781 systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 10 11:32:42 np0005580781 systemd[4300]: Created slice Slice /user.
Jan 10 11:32:42 np0005580781 systemd[4300]: podman-17519.scope: unit configures an IP firewall, but not running as root.
Jan 10 11:32:42 np0005580781 systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Jan 10 11:32:42 np0005580781 systemd[4300]: Started podman-17519.scope.
Jan 10 11:32:42 np0005580781 systemd[4300]: Started podman-pause-947aeabd.scope.
Jan 10 11:32:42 np0005580781 python3[17970]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.73:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.73:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:32:42 np0005580781 python3[17970]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 10 11:32:43 np0005580781 systemd[1]: session-5.scope: Deactivated successfully.
Jan 10 11:32:43 np0005580781 systemd[1]: session-5.scope: Consumed 42.610s CPU time.
Jan 10 11:32:43 np0005580781 systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Jan 10 11:32:43 np0005580781 systemd-logind[798]: Removed session 5.
Jan 10 11:33:06 np0005580781 systemd-logind[798]: New session 6 of user zuul.
Jan 10 11:33:06 np0005580781 systemd[1]: Started Session 6 of User zuul.
Jan 10 11:33:07 np0005580781 python3[26783]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOXnVAKH7weFkA5GtYbIuGsCkG349Pr6AZv5lMmMSI/AqOyyURLjrTZmhQphTCn8tonuqqdfNaoJoZXEKGDKaRA= zuul@np0005580780.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:33:07 np0005580781 python3[26921]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOXnVAKH7weFkA5GtYbIuGsCkG349Pr6AZv5lMmMSI/AqOyyURLjrTZmhQphTCn8tonuqqdfNaoJoZXEKGDKaRA= zuul@np0005580780.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:33:08 np0005580781 python3[27308]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005580781.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 10 11:33:08 np0005580781 python3[27497]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOXnVAKH7weFkA5GtYbIuGsCkG349Pr6AZv5lMmMSI/AqOyyURLjrTZmhQphTCn8tonuqqdfNaoJoZXEKGDKaRA= zuul@np0005580780.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 10 11:33:09 np0005580781 python3[27736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:33:09 np0005580781 python3[27979]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768062789.0827332-135-99394940406518/source _original_basename=tmpbrzuoljp follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:33:10 np0005580781 python3[28253]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 10 11:33:10 np0005580781 systemd[1]: Starting Hostname Service...
Jan 10 11:33:10 np0005580781 systemd[1]: Started Hostname Service.
Jan 10 11:33:10 np0005580781 systemd-hostnamed[28356]: Changed pretty hostname to 'compute-0'
Jan 10 11:33:10 np0005580781 systemd-hostnamed[28356]: Hostname set to <compute-0> (static)
Jan 10 11:33:10 np0005580781 NetworkManager[7178]: <info>  [1768062790.9354] hostname: static hostname changed from "np0005580781.novalocal" to "compute-0"
Jan 10 11:33:10 np0005580781 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 11:33:10 np0005580781 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 11:33:11 np0005580781 systemd[1]: session-6.scope: Deactivated successfully.
Jan 10 11:33:11 np0005580781 systemd[1]: session-6.scope: Consumed 2.503s CPU time.
Jan 10 11:33:11 np0005580781 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Jan 10 11:33:11 np0005580781 systemd-logind[798]: Removed session 6.
Jan 10 11:33:15 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 11:33:15 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 11:33:15 np0005580781 systemd[1]: man-db-cache-update.service: Consumed 1min 91ms CPU time.
Jan 10 11:33:15 np0005580781 systemd[1]: run-r91412d436caa4e05bb22e9aedcd8ad7b.service: Deactivated successfully.
Jan 10 11:33:20 np0005580781 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 11:33:40 np0005580781 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 11:36:58 np0005580781 systemd-logind[798]: New session 7 of user zuul.
Jan 10 11:36:58 np0005580781 systemd[1]: Started Session 7 of User zuul.
Jan 10 11:36:58 np0005580781 python3[30055]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:37:00 np0005580781 python3[30171]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:37:00 np0005580781 python3[30244]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:37:01 np0005580781 python3[30270]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:37:01 np0005580781 python3[30343]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:37:01 np0005580781 python3[30369]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:37:02 np0005580781 python3[30442]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:37:02 np0005580781 python3[30468]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:37:02 np0005580781 python3[30541]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:37:02 np0005580781 python3[30567]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:37:03 np0005580781 python3[30640]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:37:03 np0005580781 python3[30666]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:37:04 np0005580781 python3[30739]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:37:04 np0005580781 python3[30765]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:37:04 np0005580781 python3[30838]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768063020.0407746-33549-56292853264439/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:37:16 np0005580781 python3[30896]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:39:05 np0005580781 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 10 11:39:06 np0005580781 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 10 11:39:06 np0005580781 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 10 11:39:06 np0005580781 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 10 11:42:16 np0005580781 systemd[1]: session-7.scope: Deactivated successfully.
Jan 10 11:42:16 np0005580781 systemd[1]: session-7.scope: Consumed 5.276s CPU time.
Jan 10 11:42:16 np0005580781 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Jan 10 11:42:16 np0005580781 systemd-logind[798]: Removed session 7.
Jan 10 11:48:21 np0005580781 systemd-logind[798]: New session 8 of user zuul.
Jan 10 11:48:21 np0005580781 systemd[1]: Started Session 8 of User zuul.
Jan 10 11:48:22 np0005580781 python3.9[31145]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:48:24 np0005580781 python3.9[31326]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:48:32 np0005580781 systemd[1]: session-8.scope: Deactivated successfully.
Jan 10 11:48:32 np0005580781 systemd[1]: session-8.scope: Consumed 8.792s CPU time.
Jan 10 11:48:32 np0005580781 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Jan 10 11:48:32 np0005580781 systemd-logind[798]: Removed session 8.
Jan 10 11:48:47 np0005580781 systemd-logind[798]: New session 9 of user zuul.
Jan 10 11:48:47 np0005580781 systemd[1]: Started Session 9 of User zuul.
Jan 10 11:48:48 np0005580781 python3.9[31537]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 10 11:48:49 np0005580781 python3.9[31711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:48:50 np0005580781 python3.9[31863]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:48:51 np0005580781 python3.9[32016]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:48:52 np0005580781 python3.9[32168]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:48:53 np0005580781 python3.9[32320]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:48:54 np0005580781 python3.9[32443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063733.058479-68-19246270801804/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:48:55 np0005580781 python3.9[32595]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:48:55 np0005580781 irqbalance[794]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 10 11:48:55 np0005580781 irqbalance[794]: IRQ 26 affinity is now unmanaged
Jan 10 11:48:55 np0005580781 python3.9[32751]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:48:56 np0005580781 python3.9[32903]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:48:57 np0005580781 python3.9[33053]: ansible-ansible.builtin.service_facts Invoked
Jan 10 11:49:00 np0005580781 python3.9[33306]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:49:01 np0005580781 python3.9[33456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:49:02 np0005580781 python3.9[33610]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:49:04 np0005580781 python3.9[33768]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:49:04 np0005580781 python3.9[33852]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:49:56 np0005580781 systemd[1]: Reloading.
Jan 10 11:49:56 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:49:56 np0005580781 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 10 11:49:57 np0005580781 systemd[1]: Reloading.
Jan 10 11:49:57 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:49:57 np0005580781 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 10 11:49:57 np0005580781 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 10 11:49:57 np0005580781 systemd[1]: Reloading.
Jan 10 11:49:57 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:49:57 np0005580781 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 10 11:49:57 np0005580781 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 11:49:57 np0005580781 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 11:49:57 np0005580781 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 11:51:05 np0005580781 kernel: SELinux:  Converting 2719 SID table entries...
Jan 10 11:51:05 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 11:51:05 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 11:51:05 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 11:51:05 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 11:51:05 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 11:51:05 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 11:51:05 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 11:51:05 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 10 11:51:05 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 11:51:05 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 11:51:05 np0005580781 systemd[1]: Reloading.
Jan 10 11:51:05 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:51:05 np0005580781 systemd[1]: Starting dnf makecache...
Jan 10 11:51:05 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 11:51:05 np0005580781 dnf[34508]: Failed determining last makecache time.
Jan 10 11:51:05 np0005580781 dnf[34508]: delorean-openstack-barbican-42b4c41831408a8e323 120 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 175 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-cinder-1c00d6490d88e436f26ef 162 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-python-stevedore-c4acc5639fd2329372142 176 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-python-cloudkitty-tests-tempest-2c80f8 178 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-os-refresh-config-9bfc52b5049be2d8de61 169 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 182 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-python-designate-tests-tempest-347fdbc 173 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-glance-1fd12c29b339f30fe823e 194 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 193 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-manila-3c01b7181572c95dac462 168 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-python-whitebox-neutron-tests-tempest- 155 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-octavia-ba397f07a7331190208c 168 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-watcher-c014f81a8647287f6dcc 176 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-ansible-config_template-5ccaa22121a7ff 180 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 178 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-swift-dc98a8463506ac520c469a 186 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-python-tempestconf-8515371b7cceebd4282 163 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: delorean-openstack-heat-ui-013accbfd179753bc3f0 173 kB/s | 3.0 kB     00:00
Jan 10 11:51:06 np0005580781 dnf[34508]: CentOS Stream 9 - BaseOS                         68 kB/s | 6.7 kB     00:00
Jan 10 11:51:06 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 11:51:06 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 11:51:06 np0005580781 systemd[1]: man-db-cache-update.service: Consumed 1.317s CPU time.
Jan 10 11:51:06 np0005580781 systemd[1]: run-rd1f3c2c596344fefa723795417c7e0da.service: Deactivated successfully.
Jan 10 11:51:06 np0005580781 dnf[34508]: CentOS Stream 9 - AppStream                      30 kB/s | 6.8 kB     00:00
Jan 10 11:51:06 np0005580781 python3.9[35409]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:51:06 np0005580781 dnf[34508]: CentOS Stream 9 - CRB                            65 kB/s | 6.6 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: CentOS Stream 9 - Extras packages                74 kB/s | 7.3 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: dlrn-antelope-testing                           130 kB/s | 3.0 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: dlrn-antelope-build-deps                        149 kB/s | 3.0 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: centos9-rabbitmq                                110 kB/s | 3.0 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: centos9-storage                                 107 kB/s | 3.0 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: centos9-opstools                                101 kB/s | 3.0 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: NFV SIG OpenvSwitch                             100 kB/s | 3.0 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: repo-setup-centos-appstream                     153 kB/s | 4.4 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: repo-setup-centos-baseos                        151 kB/s | 3.9 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: repo-setup-centos-highavailability              164 kB/s | 3.9 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: repo-setup-centos-powertools                    197 kB/s | 4.3 kB     00:00
Jan 10 11:51:07 np0005580781 dnf[34508]: Extra Packages for Enterprise Linux 9 - x86_64  224 kB/s |  31 kB     00:00
Jan 10 11:51:08 np0005580781 dnf[34508]: Metadata cache created.
Jan 10 11:51:08 np0005580781 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 10 11:51:08 np0005580781 systemd[1]: Finished dnf makecache.
Jan 10 11:51:08 np0005580781 systemd[1]: dnf-makecache.service: Consumed 1.968s CPU time.
Jan 10 11:51:08 np0005580781 python3.9[35711]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 10 11:51:09 np0005580781 python3.9[35863]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 10 11:51:13 np0005580781 python3.9[36016]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:51:14 np0005580781 python3.9[36168]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 10 11:51:15 np0005580781 python3.9[36320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:51:16 np0005580781 python3.9[36472]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:51:16 np0005580781 python3.9[36595]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768063875.6270247-231-78958342477032/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:51:20 np0005580781 python3.9[36747]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:51:24 np0005580781 python3.9[36899]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:51:25 np0005580781 python3.9[37054]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:51:26 np0005580781 python3.9[37206]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 10 11:51:26 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 11:51:27 np0005580781 python3.9[37360]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 11:51:28 np0005580781 python3.9[37518]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 11:51:29 np0005580781 python3.9[37678]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 10 11:51:29 np0005580781 python3.9[37831]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 11:51:30 np0005580781 python3.9[37989]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 10 11:51:31 np0005580781 python3.9[38141]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:51:35 np0005580781 python3.9[38294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:51:35 np0005580781 python3.9[38446]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:51:36 np0005580781 python3.9[38569]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768063895.336047-350-241734069473998/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:51:37 np0005580781 python3.9[38721]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:51:37 np0005580781 systemd[1]: Starting Load Kernel Modules...
Jan 10 11:51:37 np0005580781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 10 11:51:37 np0005580781 kernel: Bridge firewalling registered
Jan 10 11:51:37 np0005580781 systemd-modules-load[38725]: Inserted module 'br_netfilter'
Jan 10 11:51:37 np0005580781 systemd[1]: Finished Load Kernel Modules.
Jan 10 11:51:38 np0005580781 python3.9[38881]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:51:39 np0005580781 python3.9[39004]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768063898.0593152-373-226241623176004/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:51:40 np0005580781 python3.9[39156]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:51:43 np0005580781 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 11:51:43 np0005580781 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 11:51:43 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 11:51:43 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 11:51:43 np0005580781 systemd[1]: Reloading.
Jan 10 11:51:43 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:51:43 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 11:51:45 np0005580781 python3.9[40336]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:51:46 np0005580781 python3.9[41201]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 10 11:51:46 np0005580781 python3.9[41914]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:51:47 np0005580781 python3.9[42810]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:51:47 np0005580781 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 10 11:51:48 np0005580781 systemd[1]: Starting Authorization Manager...
Jan 10 11:51:48 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 11:51:48 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 11:51:48 np0005580781 systemd[1]: man-db-cache-update.service: Consumed 5.629s CPU time.
Jan 10 11:51:48 np0005580781 systemd[1]: run-r75795725a524488086aae489f1efef2c.service: Deactivated successfully.
Jan 10 11:51:48 np0005580781 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 10 11:51:48 np0005580781 polkitd[43532]: Started polkitd version 0.117
Jan 10 11:51:48 np0005580781 systemd[1]: Started Authorization Manager.
Jan 10 11:51:48 np0005580781 python3.9[43703]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:51:49 np0005580781 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 10 11:51:49 np0005580781 systemd[1]: tuned.service: Deactivated successfully.
Jan 10 11:51:49 np0005580781 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 10 11:51:49 np0005580781 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 10 11:51:49 np0005580781 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 10 11:51:49 np0005580781 python3.9[43865]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 10 11:51:52 np0005580781 python3.9[44017]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:51:52 np0005580781 systemd[1]: Reloading.
Jan 10 11:51:52 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:51:53 np0005580781 python3.9[44207]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:51:53 np0005580781 systemd[1]: Reloading.
Jan 10 11:51:53 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:51:54 np0005580781 python3.9[44396]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:51:54 np0005580781 python3.9[44549]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:51:54 np0005580781 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 10 11:51:55 np0005580781 python3.9[44702]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:51:57 np0005580781 python3.9[44864]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:51:58 np0005580781 python3.9[45017]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:51:58 np0005580781 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 10 11:51:58 np0005580781 systemd[1]: Stopped Apply Kernel Variables.
Jan 10 11:51:58 np0005580781 systemd[1]: Stopping Apply Kernel Variables...
Jan 10 11:51:58 np0005580781 systemd[1]: Starting Apply Kernel Variables...
Jan 10 11:51:58 np0005580781 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 10 11:51:58 np0005580781 systemd[1]: Finished Apply Kernel Variables.
Jan 10 11:51:59 np0005580781 systemd[1]: session-9.scope: Deactivated successfully.
Jan 10 11:51:59 np0005580781 systemd[1]: session-9.scope: Consumed 2min 31.344s CPU time.
Jan 10 11:51:59 np0005580781 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Jan 10 11:51:59 np0005580781 systemd-logind[798]: Removed session 9.
Jan 10 11:52:05 np0005580781 systemd-logind[798]: New session 10 of user zuul.
Jan 10 11:52:05 np0005580781 systemd[1]: Started Session 10 of User zuul.
Jan 10 11:52:06 np0005580781 python3.9[45200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:52:07 np0005580781 python3.9[45356]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 10 11:52:08 np0005580781 python3.9[45509]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 11:52:09 np0005580781 python3.9[45667]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 11:52:10 np0005580781 python3.9[45827]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:52:11 np0005580781 python3.9[45912]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 11:52:13 np0005580781 python3.9[46075]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:52:25 np0005580781 kernel: SELinux:  Converting 2732 SID table entries...
Jan 10 11:52:25 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 11:52:25 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 11:52:25 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 11:52:25 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 11:52:25 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 11:52:25 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 11:52:25 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 11:52:26 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 10 11:52:26 np0005580781 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 10 11:52:28 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 11:52:28 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 11:52:28 np0005580781 systemd[1]: Reloading.
Jan 10 11:52:28 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:52:28 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:52:28 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 11:52:28 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 11:52:28 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 11:52:29 np0005580781 systemd[1]: run-r93b3b86b0dd04336994bcf814f486156.service: Deactivated successfully.
Jan 10 11:52:30 np0005580781 python3.9[47174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 11:52:30 np0005580781 systemd[1]: Reloading.
Jan 10 11:52:30 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:52:30 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:52:30 np0005580781 systemd[1]: Starting Open vSwitch Database Unit...
Jan 10 11:52:30 np0005580781 chown[47216]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 10 11:52:30 np0005580781 ovs-ctl[47221]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 10 11:52:30 np0005580781 ovs-ctl[47221]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 10 11:52:30 np0005580781 ovs-ctl[47221]: Starting ovsdb-server [  OK  ]
Jan 10 11:52:30 np0005580781 ovs-vsctl[47270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 10 11:52:30 np0005580781 ovs-vsctl[47290]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"fbd04e21-7be2-4eb3-a385-03f0bb540a40\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 10 11:52:30 np0005580781 ovs-ctl[47221]: Configuring Open vSwitch system IDs [  OK  ]
Jan 10 11:52:30 np0005580781 ovs-ctl[47221]: Enabling remote OVSDB managers [  OK  ]
Jan 10 11:52:30 np0005580781 systemd[1]: Started Open vSwitch Database Unit.
Jan 10 11:52:30 np0005580781 ovs-vsctl[47296]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 10 11:52:30 np0005580781 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 10 11:52:30 np0005580781 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 10 11:52:30 np0005580781 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 10 11:52:30 np0005580781 kernel: openvswitch: Open vSwitch switching datapath
Jan 10 11:52:30 np0005580781 ovs-ctl[47341]: Inserting openvswitch module [  OK  ]
Jan 10 11:52:30 np0005580781 ovs-ctl[47310]: Starting ovs-vswitchd [  OK  ]
Jan 10 11:52:30 np0005580781 ovs-vsctl[47358]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 10 11:52:30 np0005580781 ovs-ctl[47310]: Enabling remote OVSDB managers [  OK  ]
Jan 10 11:52:30 np0005580781 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 10 11:52:30 np0005580781 systemd[1]: Starting Open vSwitch...
Jan 10 11:52:30 np0005580781 systemd[1]: Finished Open vSwitch.
Jan 10 11:52:31 np0005580781 python3.9[47510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:52:32 np0005580781 python3.9[47662]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 10 11:52:34 np0005580781 kernel: SELinux:  Converting 2746 SID table entries...
Jan 10 11:52:34 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 11:52:34 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 11:52:34 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 11:52:34 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 11:52:34 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 11:52:34 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 11:52:34 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 11:52:35 np0005580781 python3.9[47818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:52:35 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 10 11:52:36 np0005580781 python3.9[47976]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:52:38 np0005580781 python3.9[48131]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:52:40 np0005580781 python3.9[48418]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 10 11:52:40 np0005580781 python3.9[48568]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:52:41 np0005580781 python3.9[48722]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:52:43 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 11:52:43 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 11:52:43 np0005580781 systemd[1]: Reloading.
Jan 10 11:52:43 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:52:43 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:52:43 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 11:52:44 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 11:52:44 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 11:52:44 np0005580781 systemd[1]: run-r9d21fc0d50ca465b86a5967bc2a62f3c.service: Deactivated successfully.
Jan 10 11:52:45 np0005580781 python3.9[49038]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:52:45 np0005580781 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 10 11:52:45 np0005580781 systemd[1]: Stopped Network Manager Wait Online.
Jan 10 11:52:45 np0005580781 systemd[1]: Stopping Network Manager Wait Online...
Jan 10 11:52:45 np0005580781 systemd[1]: Stopping Network Manager...
Jan 10 11:52:45 np0005580781 NetworkManager[7178]: <info>  [1768063965.1525] caught SIGTERM, shutting down normally.
Jan 10 11:52:45 np0005580781 NetworkManager[7178]: <info>  [1768063965.1539] dhcp4 (eth0): canceled DHCP transaction
Jan 10 11:52:45 np0005580781 NetworkManager[7178]: <info>  [1768063965.1540] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:52:45 np0005580781 NetworkManager[7178]: <info>  [1768063965.1540] dhcp4 (eth0): state changed no lease
Jan 10 11:52:45 np0005580781 NetworkManager[7178]: <info>  [1768063965.1542] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 11:52:45 np0005580781 NetworkManager[7178]: <info>  [1768063965.1610] exiting (success)
Jan 10 11:52:45 np0005580781 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 11:52:45 np0005580781 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 10 11:52:45 np0005580781 systemd[1]: Stopped Network Manager.
Jan 10 11:52:45 np0005580781 systemd[1]: NetworkManager.service: Consumed 13.559s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Jan 10 11:52:45 np0005580781 systemd[1]: Starting Network Manager...
Jan 10 11:52:45 np0005580781 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.2206] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:bad47697-514b-4229-8b29-23921a9a6958)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.2209] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.2271] manager[0x5577ecce3000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 10 11:52:45 np0005580781 systemd[1]: Starting Hostname Service...
Jan 10 11:52:45 np0005580781 systemd[1]: Started Hostname Service.
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3067] hostname: hostname: using hostnamed
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3068] hostname: static hostname changed from (none) to "compute-0"
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3073] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3078] manager[0x5577ecce3000]: rfkill: Wi-Fi hardware radio set enabled
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3079] manager[0x5577ecce3000]: rfkill: WWAN hardware radio set enabled
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3098] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3108] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3109] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3110] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3111] manager: Networking is enabled by state file
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3114] settings: Loaded settings plugin: keyfile (internal)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3118] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3140] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3149] dhcp: init: Using DHCP client 'internal'
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3152] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3157] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3163] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3169] device (lo): Activation: starting connection 'lo' (d627873a-279e-4130-ac7c-6a2872dc6445)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3175] device (eth0): carrier: link connected
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3179] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3184] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3185] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3191] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3197] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3202] device (eth1): carrier: link connected
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3206] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3211] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b) (indicated)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3212] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3216] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3222] device (eth1): Activation: starting connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3228] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 10 11:52:45 np0005580781 systemd[1]: Started Network Manager.
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3235] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3246] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3250] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3252] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3256] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3258] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3262] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3266] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3273] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3277] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3286] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3303] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3312] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3315] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3317] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3322] device (lo): Activation: successful, device activated.
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3335] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 10 11:52:45 np0005580781 systemd[1]: Starting Network Manager Wait Online...
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3397] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3403] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3405] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3408] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3412] device (eth1): Activation: successful, device activated.
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3425] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3427] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3430] manager: NetworkManager state is now CONNECTED_SITE
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3432] device (eth0): Activation: successful, device activated.
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3435] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 10 11:52:45 np0005580781 NetworkManager[49047]: <info>  [1768063965.3437] manager: startup complete
Jan 10 11:52:45 np0005580781 systemd[1]: Finished Network Manager Wait Online.
Jan 10 11:52:46 np0005580781 python3.9[49265]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:52:50 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 11:52:50 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 11:52:50 np0005580781 systemd[1]: Reloading.
Jan 10 11:52:51 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:52:51 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:52:51 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 11:52:51 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 11:52:51 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 11:52:51 np0005580781 systemd[1]: run-ra68a682dcca1494baebaa7549057944c.service: Deactivated successfully.
Jan 10 11:52:53 np0005580781 python3.9[49728]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:52:53 np0005580781 python3.9[49880]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:52:54 np0005580781 python3.9[50034]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:52:55 np0005580781 python3.9[50186]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:52:55 np0005580781 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 11:52:56 np0005580781 python3.9[50338]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:52:56 np0005580781 python3.9[50490]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:52:57 np0005580781 python3.9[50642]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:52:58 np0005580781 python3.9[50765]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063976.8513954-224-50568702282763/.source _original_basename=.5uer7pom follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:52:58 np0005580781 python3.9[50917]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:52:59 np0005580781 python3.9[51069]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 10 11:53:00 np0005580781 python3.9[51221]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:53:02 np0005580781 python3.9[51648]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 10 11:53:03 np0005580781 ansible-async_wrapper.py[51823]: Invoked with j968501777284 300 /home/zuul/.ansible/tmp/ansible-tmp-1768063982.8245974-290-35122400301436/AnsiballZ_edpm_os_net_config.py _
Jan 10 11:53:03 np0005580781 ansible-async_wrapper.py[51826]: Starting module and watcher
Jan 10 11:53:03 np0005580781 ansible-async_wrapper.py[51826]: Start watching 51827 (300)
Jan 10 11:53:03 np0005580781 ansible-async_wrapper.py[51827]: Start module (51827)
Jan 10 11:53:03 np0005580781 ansible-async_wrapper.py[51823]: Return async_wrapper task started.
Jan 10 11:53:03 np0005580781 python3.9[51828]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 10 11:53:04 np0005580781 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 10 11:53:04 np0005580781 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 10 11:53:04 np0005580781 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 10 11:53:04 np0005580781 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 10 11:53:04 np0005580781 kernel: cfg80211: failed to load regulatory.db
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.0664] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.0685] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1452] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1454] audit: op="connection-add" uuid="23749045-7eb7-469c-8025-95dce7f6a3d3" name="br-ex-br" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1475] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1477] audit: op="connection-add" uuid="572c521b-ef1e-44c2-9e45-2cca971e91bc" name="br-ex-port" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1491] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1493] audit: op="connection-add" uuid="73d0cb0f-8899-4e89-8c0c-62c877854379" name="eth1-port" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1506] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1508] audit: op="connection-add" uuid="3557aafa-0c7c-47f7-8a26-0de2cb1d26f1" name="vlan20-port" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1521] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1523] audit: op="connection-add" uuid="7e53a335-ecc6-41f9-a8f8-655a8fc57559" name="vlan21-port" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1535] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1536] audit: op="connection-add" uuid="4108fd7f-3d23-4e29-bd33-50b051eb3de2" name="vlan22-port" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1549] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1551] audit: op="connection-add" uuid="726bf054-d501-4702-b1b5-e5edf3bf45dd" name="vlan23-port" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1574] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1593] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1595] audit: op="connection-add" uuid="a87f6957-10a3-478e-ab01-106b44cf4872" name="br-ex-if" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1657] audit: op="connection-update" uuid="91161bbf-f289-5cf0-9a28-a3cd6f92331b" name="ci-private-network" args="ovs-interface.type,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.dns,ipv4.never-default,ipv4.routing-rules,connection.master,connection.timestamp,connection.slave-type,connection.port-type,connection.controller,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.dns,ipv6.routing-rules,ovs-external-ids.data" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1677] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1679] audit: op="connection-add" uuid="4383ba21-56a7-4018-aef3-ad454d1194e3" name="vlan20-if" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1698] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1700] audit: op="connection-add" uuid="f9d660d7-3e79-4f86-92b3-5afd2ef4ef22" name="vlan21-if" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1719] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1722] audit: op="connection-add" uuid="489eb104-a1ff-4310-adab-54b2c3517112" name="vlan22-if" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1742] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1744] audit: op="connection-add" uuid="82fb7275-a56a-46b9-975d-13d2400166d7" name="vlan23-if" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1760] audit: op="connection-delete" uuid="3d2c32e1-e902-3a7a-bfe1-2a4ee0361874" name="Wired connection 1" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1775] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1778] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1785] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1790] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (23749045-7eb7-469c-8025-95dce7f6a3d3)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1790] audit: op="connection-activate" uuid="23749045-7eb7-469c-8025-95dce7f6a3d3" name="br-ex-br" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1792] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1793] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1800] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1804] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (572c521b-ef1e-44c2-9e45-2cca971e91bc)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1806] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1807] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1812] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1816] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (73d0cb0f-8899-4e89-8c0c-62c877854379)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1818] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1819] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1824] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1829] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (3557aafa-0c7c-47f7-8a26-0de2cb1d26f1)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1831] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1832] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1837] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1842] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7e53a335-ecc6-41f9-a8f8-655a8fc57559)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1844] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1845] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1850] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1855] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4108fd7f-3d23-4e29-bd33-50b051eb3de2)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1857] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1858] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1864] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1869] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (726bf054-d501-4702-b1b5-e5edf3bf45dd)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1869] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1872] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1874] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1881] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1882] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1886] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1891] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a87f6957-10a3-478e-ab01-106b44cf4872)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1892] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1895] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1897] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1899] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1900] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1912] device (eth1): disconnecting for new activation request.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1913] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1917] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1919] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1921] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1924] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1925] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1928] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1933] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4383ba21-56a7-4018-aef3-ad454d1194e3)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1934] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1937] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1939] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1940] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1943] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1944] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1948] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1952] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (f9d660d7-3e79-4f86-92b3-5afd2ef4ef22)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1954] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1957] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1959] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1961] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1964] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1965] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1968] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1974] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (489eb104-a1ff-4310-adab-54b2c3517112)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1974] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1978] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1980] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1982] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1985] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <warn>  [1768063986.1986] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1990] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1995] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (82fb7275-a56a-46b9-975d-13d2400166d7)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.1996] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2002] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2004] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2005] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2008] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2021] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2023] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2026] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2028] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2036] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2040] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2044] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2048] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2050] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2056] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2060] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2064] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2066] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 kernel: ovs-system: entered promiscuous mode
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2071] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2074] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2078] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2080] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2084] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2088] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2092] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2094] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2099] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2104] dhcp4 (eth0): canceled DHCP transaction
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2104] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2105] dhcp4 (eth0): state changed no lease
Jan 10 11:53:06 np0005580781 kernel: Timeout policy base is empty
Jan 10 11:53:06 np0005580781 systemd-udevd[51837]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2106] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2120] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2127] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51829 uid=0 result="fail" reason="Device is not activated"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2131] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2137] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2174] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2178] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2220] device (eth1): disconnecting for new activation request.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2221] audit: op="connection-activate" uuid="91161bbf-f289-5cf0-9a28-a3cd6f92331b" name="ci-private-network" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2243] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2256] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 11:53:06 np0005580781 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2399] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2577] device (eth1): Activation: starting connection 'ci-private-network' (91161bbf-f289-5cf0-9a28-a3cd6f92331b)
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2602] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2609] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2619] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2620] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2622] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2623] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2624] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2626] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2627] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2632] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2638] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2642] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2645] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2651] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2655] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2659] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2662] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2665] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2670] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2673] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2678] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2681] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2686] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2690] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2696] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2702] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2753] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2756] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.2764] device (eth1): Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 kernel: br-ex: entered promiscuous mode
Jan 10 11:53:06 np0005580781 kernel: vlan22: entered promiscuous mode
Jan 10 11:53:06 np0005580781 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 10 11:53:06 np0005580781 systemd-udevd[51836]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 11:53:06 np0005580781 kernel: vlan23: entered promiscuous mode
Jan 10 11:53:06 np0005580781 systemd-udevd[51835]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3093] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3104] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3131] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3133] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3139] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 kernel: vlan20: entered promiscuous mode
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3181] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3188] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 10 11:53:06 np0005580781 systemd-udevd[51950]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3207] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3214] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3244] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3246] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3247] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3251] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3255] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 kernel: vlan21: entered promiscuous mode
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3258] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3309] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3317] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3336] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3337] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3341] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3417] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3426] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3453] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3454] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 10 11:53:06 np0005580781 NetworkManager[49047]: <info>  [1768063986.3458] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 10 11:53:07 np0005580781 NetworkManager[49047]: <info>  [1768063987.4379] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 11:53:07 np0005580781 NetworkManager[49047]: <info>  [1768063987.6821] checkpoint[0x5577eccb8950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 10 11:53:07 np0005580781 NetworkManager[49047]: <info>  [1768063987.6823] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51829 uid=0 result="success"
Jan 10 11:53:07 np0005580781 python3.9[52191]: ansible-ansible.legacy.async_status Invoked with jid=j968501777284.51823 mode=status _async_dir=/root/.ansible_async
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.0426] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.0442] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.3075] audit: op="networking-control" arg="global-dns-configuration" pid=51829 uid=0 result="success"
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.3120] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.3162] audit: op="networking-control" arg="global-dns-configuration" pid=51829 uid=0 result="success"
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.3186] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.4777] checkpoint[0x5577eccb8a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 10 11:53:08 np0005580781 NetworkManager[49047]: <info>  [1768063988.4783] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51829 uid=0 result="success"
Jan 10 11:53:08 np0005580781 ansible-async_wrapper.py[51827]: Module complete (51827)
Jan 10 11:53:08 np0005580781 ansible-async_wrapper.py[51826]: Done in kid B.
Jan 10 11:53:11 np0005580781 python3.9[52298]: ansible-ansible.legacy.async_status Invoked with jid=j968501777284.51823 mode=status _async_dir=/root/.ansible_async
Jan 10 11:53:11 np0005580781 python3.9[52397]: ansible-ansible.legacy.async_status Invoked with jid=j968501777284.51823 mode=cleanup _async_dir=/root/.ansible_async
Jan 10 11:53:12 np0005580781 python3.9[52549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:53:12 np0005580781 python3.9[52672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063991.9172146-317-4528830345523/.source.returncode _original_basename=.evysxujx follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:53:13 np0005580781 python3.9[52824]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:53:14 np0005580781 python3.9[52947]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768063993.152727-333-278884552190980/.source.cfg _original_basename=.jhh3vdjf follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:53:14 np0005580781 python3.9[53100]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:53:15 np0005580781 systemd[1]: Reloading Network Manager...
Jan 10 11:53:15 np0005580781 NetworkManager[49047]: <info>  [1768063995.0304] audit: op="reload" arg="0" pid=53104 uid=0 result="success"
Jan 10 11:53:15 np0005580781 NetworkManager[49047]: <info>  [1768063995.0314] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 10 11:53:15 np0005580781 systemd[1]: Reloaded Network Manager.
Jan 10 11:53:15 np0005580781 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 10 11:53:15 np0005580781 systemd[1]: session-10.scope: Deactivated successfully.
Jan 10 11:53:15 np0005580781 systemd[1]: session-10.scope: Consumed 53.013s CPU time.
Jan 10 11:53:15 np0005580781 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Jan 10 11:53:15 np0005580781 systemd-logind[798]: Removed session 10.
Jan 10 11:53:21 np0005580781 systemd-logind[798]: New session 11 of user zuul.
Jan 10 11:53:21 np0005580781 systemd[1]: Started Session 11 of User zuul.
Jan 10 11:53:22 np0005580781 python3.9[53290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:53:23 np0005580781 python3.9[53444]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:53:24 np0005580781 python3.9[53638]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:53:25 np0005580781 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 10 11:53:25 np0005580781 systemd[1]: session-11.scope: Deactivated successfully.
Jan 10 11:53:25 np0005580781 systemd[1]: session-11.scope: Consumed 2.480s CPU time.
Jan 10 11:53:25 np0005580781 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Jan 10 11:53:25 np0005580781 systemd-logind[798]: Removed session 11.
Jan 10 11:53:30 np0005580781 systemd-logind[798]: New session 12 of user zuul.
Jan 10 11:53:30 np0005580781 systemd[1]: Started Session 12 of User zuul.
Jan 10 11:53:31 np0005580781 python3.9[53820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:53:32 np0005580781 python3.9[53974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:53:33 np0005580781 python3.9[54131]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:53:34 np0005580781 python3.9[54215]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:53:37 np0005580781 python3.9[54369]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:53:38 np0005580781 python3.9[54564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:53:39 np0005580781 python3.9[54716]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:53:39 np0005580781 systemd[1]: var-lib-containers-storage-overlay-compat2616064998-merged.mount: Deactivated successfully.
Jan 10 11:53:39 np0005580781 podman[54717]: 2026-01-10 16:53:39.522741127 +0000 UTC m=+0.080718320 system refresh
Jan 10 11:53:40 np0005580781 python3.9[54879]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:53:40 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:53:41 np0005580781 python3.9[55002]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064019.7254739-74-134145731168203/.source.json follow=False _original_basename=podman_network_config.j2 checksum=eb065b71cd2cac8ce28582061d6993c967907242 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:53:41 np0005580781 python3.9[55154]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:53:42 np0005580781 python3.9[55277]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064021.225296-89-58555669213042/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e054e42fc917865162376c34713b3d5516074d23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:53:43 np0005580781 python3.9[55429]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:53:43 np0005580781 python3.9[55581]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:53:44 np0005580781 python3.9[55733]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:53:45 np0005580781 python3.9[55885]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:53:45 np0005580781 python3.9[56037]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:53:48 np0005580781 python3.9[56190]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:53:48 np0005580781 python3.9[56344]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:53:49 np0005580781 python3.9[56496]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:53:50 np0005580781 python3.9[56648]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:53:51 np0005580781 python3.9[56801]: ansible-service_facts Invoked
Jan 10 11:53:51 np0005580781 network[56818]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 11:53:51 np0005580781 network[56819]: 'network-scripts' will be removed from distribution in near future.
Jan 10 11:53:51 np0005580781 network[56820]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 11:53:58 np0005580781 python3.9[57272]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 11:54:01 np0005580781 python3.9[57425]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 10 11:54:02 np0005580781 python3.9[57577]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:02 np0005580781 python3.9[57702]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064041.8977213-233-100695284981687/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:03 np0005580781 python3.9[57856]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:04 np0005580781 python3.9[57981]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064043.1907427-248-195756869997730/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:05 np0005580781 python3.9[58135]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:06 np0005580781 python3.9[58289]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:54:07 np0005580781 python3.9[58373]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:54:08 np0005580781 python3.9[58527]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:54:09 np0005580781 python3.9[58611]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:54:09 np0005580781 chronyd[785]: chronyd exiting
Jan 10 11:54:09 np0005580781 systemd[1]: Stopping NTP client/server...
Jan 10 11:54:09 np0005580781 systemd[1]: chronyd.service: Deactivated successfully.
Jan 10 11:54:09 np0005580781 systemd[1]: Stopped NTP client/server.
Jan 10 11:54:09 np0005580781 systemd[1]: Starting NTP client/server...
Jan 10 11:54:09 np0005580781 chronyd[58619]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 10 11:54:09 np0005580781 chronyd[58619]: Frequency -28.332 +/- 0.098 ppm read from /var/lib/chrony/drift
Jan 10 11:54:09 np0005580781 chronyd[58619]: Loaded seccomp filter (level 2)
Jan 10 11:54:09 np0005580781 systemd[1]: Started NTP client/server.
Jan 10 11:54:10 np0005580781 systemd[1]: session-12.scope: Deactivated successfully.
Jan 10 11:54:10 np0005580781 systemd[1]: session-12.scope: Consumed 27.578s CPU time.
Jan 10 11:54:10 np0005580781 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Jan 10 11:54:10 np0005580781 systemd-logind[798]: Removed session 12.
Jan 10 11:54:15 np0005580781 systemd-logind[798]: New session 13 of user zuul.
Jan 10 11:54:15 np0005580781 systemd[1]: Started Session 13 of User zuul.
Jan 10 11:54:16 np0005580781 python3.9[58800]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:17 np0005580781 python3.9[58952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:18 np0005580781 python3.9[59075]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064057.0053873-29-92772517778425/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:18 np0005580781 systemd[1]: session-13.scope: Deactivated successfully.
Jan 10 11:54:18 np0005580781 systemd[1]: session-13.scope: Consumed 2.013s CPU time.
Jan 10 11:54:18 np0005580781 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Jan 10 11:54:18 np0005580781 systemd-logind[798]: Removed session 13.
Jan 10 11:54:24 np0005580781 systemd-logind[798]: New session 14 of user zuul.
Jan 10 11:54:24 np0005580781 systemd[1]: Started Session 14 of User zuul.
Jan 10 11:54:25 np0005580781 python3.9[59253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:54:27 np0005580781 python3.9[59409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:28 np0005580781 python3.9[59584]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:28 np0005580781 python3.9[59707]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1768064067.3679338-36-117475129275537/.source.json _original_basename=.yxzqarkb follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:29 np0005580781 python3.9[59859]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:30 np0005580781 python3.9[59982]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064069.2496724-59-128005440004113/.source _original_basename=.i_7mca7n follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:30 np0005580781 python3.9[60134]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:54:31 np0005580781 python3.9[60286]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:32 np0005580781 python3.9[60409]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064071.0787003-83-192343670111554/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:54:32 np0005580781 python3.9[60561]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:33 np0005580781 python3.9[60684]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064072.3497322-83-266489083251085/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 11:54:34 np0005580781 python3.9[60836]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:34 np0005580781 python3.9[60988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:35 np0005580781 python3.9[61111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064074.278659-120-135229664252619/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:36 np0005580781 python3.9[61263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:36 np0005580781 python3.9[61386]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064075.6654694-135-32427567519276/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:37 np0005580781 python3.9[61538]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:54:37 np0005580781 systemd[1]: Reloading.
Jan 10 11:54:37 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:54:37 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:54:38 np0005580781 systemd[1]: Reloading.
Jan 10 11:54:38 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:54:38 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:54:38 np0005580781 systemd[1]: Starting EDPM Container Shutdown...
Jan 10 11:54:38 np0005580781 systemd[1]: Finished EDPM Container Shutdown.
Jan 10 11:54:39 np0005580781 python3.9[61764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:39 np0005580781 python3.9[61887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064078.5615966-158-259876866495762/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:40 np0005580781 python3.9[62039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:40 np0005580781 python3.9[62162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064079.708673-173-213946176817083/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:41 np0005580781 python3.9[62314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:54:41 np0005580781 systemd[1]: Reloading.
Jan 10 11:54:41 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:54:41 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:54:41 np0005580781 systemd[1]: Reloading.
Jan 10 11:54:41 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:54:41 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:54:42 np0005580781 systemd[1]: Starting Create netns directory...
Jan 10 11:54:42 np0005580781 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 11:54:42 np0005580781 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 11:54:42 np0005580781 systemd[1]: Finished Create netns directory.
Jan 10 11:54:42 np0005580781 python3.9[62540]: ansible-ansible.builtin.service_facts Invoked
Jan 10 11:54:43 np0005580781 network[62557]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 11:54:43 np0005580781 network[62558]: 'network-scripts' will be removed from distribution in near future.
Jan 10 11:54:43 np0005580781 network[62559]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 11:54:49 np0005580781 python3.9[62821]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:54:49 np0005580781 systemd[1]: Reloading.
Jan 10 11:54:49 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:54:49 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:54:49 np0005580781 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 10 11:54:49 np0005580781 iptables.init[62861]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 10 11:54:49 np0005580781 iptables.init[62861]: iptables: Flushing firewall rules: [  OK  ]
Jan 10 11:54:49 np0005580781 systemd[1]: iptables.service: Deactivated successfully.
Jan 10 11:54:49 np0005580781 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 10 11:54:50 np0005580781 python3.9[63057]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:54:51 np0005580781 python3.9[63211]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:54:51 np0005580781 systemd[1]: Reloading.
Jan 10 11:54:51 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:54:51 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:54:51 np0005580781 systemd[1]: Starting Netfilter Tables...
Jan 10 11:54:51 np0005580781 systemd[1]: Finished Netfilter Tables.
Jan 10 11:54:52 np0005580781 python3.9[63404]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:54:53 np0005580781 python3.9[63557]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:54 np0005580781 python3.9[63682]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064092.9674098-242-123190661667199/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:54 np0005580781 python3.9[63835]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:54:55 np0005580781 systemd[1]: Reloading OpenSSH server daemon...
Jan 10 11:54:55 np0005580781 systemd[1]: Reloaded OpenSSH server daemon.
Jan 10 11:54:55 np0005580781 python3.9[63991]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:56 np0005580781 python3.9[64143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:54:57 np0005580781 python3.9[64266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064095.9862332-273-66539776044936/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:58 np0005580781 python3.9[64418]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 10 11:54:58 np0005580781 systemd[1]: Starting Time & Date Service...
Jan 10 11:54:58 np0005580781 systemd[1]: Started Time & Date Service.
Jan 10 11:54:58 np0005580781 python3.9[64574]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:54:59 np0005580781 python3.9[64726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:00 np0005580781 python3.9[64849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064099.0083985-308-237106065279914/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:00 np0005580781 python3.9[65001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:01 np0005580781 python3.9[65124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064100.2082083-323-171115577821664/.source.yaml _original_basename=.6jocap2c follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:01 np0005580781 python3.9[65276]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:02 np0005580781 python3.9[65399]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064101.4290164-338-43507799260864/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:03 np0005580781 python3.9[65552]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:55:03 np0005580781 python3.9[65705]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:55:04 np0005580781 python3[65858]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 11:55:05 np0005580781 python3.9[66010]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:05 np0005580781 python3.9[66133]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064104.78193-377-27315577260614/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:06 np0005580781 python3.9[66285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:07 np0005580781 python3.9[66408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064106.0899832-392-98624383713792/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:07 np0005580781 python3.9[66560]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:08 np0005580781 python3.9[66683]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064107.3581686-407-59295820658138/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:09 np0005580781 python3.9[66835]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:09 np0005580781 python3.9[66958]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064108.542549-422-62426910618462/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:10 np0005580781 python3.9[67110]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 11:55:11 np0005580781 python3.9[67233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064109.858952-437-137312440973/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:11 np0005580781 python3.9[67385]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:12 np0005580781 python3.9[67537]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:55:13 np0005580781 python3.9[67696]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:14 np0005580781 python3.9[67849]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:14 np0005580781 python3.9[68001]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:15 np0005580781 python3.9[68153]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 11:55:16 np0005580781 python3.9[68306]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 11:55:17 np0005580781 systemd[1]: session-14.scope: Deactivated successfully.
Jan 10 11:55:17 np0005580781 systemd[1]: session-14.scope: Consumed 37.985s CPU time.
Jan 10 11:55:17 np0005580781 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Jan 10 11:55:17 np0005580781 systemd-logind[798]: Removed session 14.
Jan 10 11:55:22 np0005580781 systemd-logind[798]: New session 15 of user zuul.
Jan 10 11:55:22 np0005580781 systemd[1]: Started Session 15 of User zuul.
Jan 10 11:55:23 np0005580781 python3.9[68487]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 10 11:55:24 np0005580781 python3.9[68639]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:55:25 np0005580781 python3.9[68791]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:55:26 np0005580781 python3.9[68943]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLbf1u7QZKIo5G+YWiNhcXI+Bt6YV4GfE/ux3dizYMgWBt9o+PmlYYMiVREbRw0Bbw1ytXXbF5+nj3Xb2CXI8ussGl0WspjKSeiZ6iZLcZTiCJLgJ/9hsvwXR//dQk9MHjPU21/f9Bmm5bXO7JD6wyeZ6BhNNSRil+tMQ9dtlaRlLoSzr5CXtKSgvp0EnFO/wO0yIjn5vj0Kg53pKe6PklqqbDKQe4B3RTSjCo711H66GqFuA0OZDkpKEVqdQFy9HUPAxgflwamxh1bRZYQ4oZ+sRK0y7Aau5nyIxefmh+nrgkwpuGnfu/PBcFHlgDpGdK5SR2MN7oUwfJtJl+qp1MFaUz+TRF7THXK8e6MCD0RPGfqlim6D6qGfKkbBYM50kTncYakPtGOrLbf/hARiTSEduglbNBYv0vatpv1emwjOPwkAu3DZdOi4PokhOq+BnOnG95UH3ZzOWO+UnNEiCQgCu7NbzJOFb/KoBU8XRT1o8yPWdpwQ+mKGFE1PGsA7k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVw/TzKh+QQYsI9HFUl2xKC/Iozkh6C2Rlm1r7qShYC#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHuUq5M0wkVhsnk90cNjQOZixGqQR1X/PXyTQuPIQfBmEkOk4KlPkJk1al+bzULcCOXjdbnilDQbL6yRpQlhrU=#012 create=True mode=0644 path=/tmp/ansible.v4k87suj state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:27 np0005580781 python3.9[69095]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.v4k87suj' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:55:27 np0005580781 python3.9[69249]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.v4k87suj state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:28 np0005580781 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 10 11:55:28 np0005580781 systemd[1]: session-15.scope: Deactivated successfully.
Jan 10 11:55:28 np0005580781 systemd[1]: session-15.scope: Consumed 3.653s CPU time.
Jan 10 11:55:28 np0005580781 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Jan 10 11:55:28 np0005580781 systemd-logind[798]: Removed session 15.
Jan 10 11:55:33 np0005580781 systemd-logind[798]: New session 16 of user zuul.
Jan 10 11:55:33 np0005580781 systemd[1]: Started Session 16 of User zuul.
Jan 10 11:55:34 np0005580781 python3.9[69429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:55:35 np0005580781 python3.9[69585]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 10 11:55:36 np0005580781 python3.9[69739]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 11:55:38 np0005580781 python3.9[69892]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:55:39 np0005580781 python3.9[70045]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:55:39 np0005580781 python3.9[70199]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:55:40 np0005580781 python3.9[70354]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:55:41 np0005580781 systemd[1]: session-16.scope: Deactivated successfully.
Jan 10 11:55:41 np0005580781 systemd[1]: session-16.scope: Consumed 4.741s CPU time.
Jan 10 11:55:41 np0005580781 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Jan 10 11:55:41 np0005580781 systemd-logind[798]: Removed session 16.
Jan 10 11:55:46 np0005580781 systemd-logind[798]: New session 17 of user zuul.
Jan 10 11:55:46 np0005580781 systemd[1]: Started Session 17 of User zuul.
Jan 10 11:55:47 np0005580781 python3.9[70532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:55:48 np0005580781 python3.9[70688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 11:55:49 np0005580781 python3.9[70772]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 11:55:52 np0005580781 python3.9[70923]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:55:53 np0005580781 python3.9[71074]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 11:55:54 np0005580781 python3.9[71224]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:55:54 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 11:55:54 np0005580781 python3.9[71375]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 11:55:55 np0005580781 systemd[1]: session-17.scope: Deactivated successfully.
Jan 10 11:55:55 np0005580781 systemd[1]: session-17.scope: Consumed 6.417s CPU time.
Jan 10 11:55:55 np0005580781 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Jan 10 11:55:55 np0005580781 systemd-logind[798]: Removed session 17.
Jan 10 11:56:02 np0005580781 systemd-logind[798]: New session 18 of user zuul.
Jan 10 11:56:02 np0005580781 systemd[1]: Started Session 18 of User zuul.
Jan 10 11:56:08 np0005580781 python3[72141]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:56:10 np0005580781 python3[72236]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 11:56:11 np0005580781 python3[72263]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:56:12 np0005580781 python3[72289]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:12 np0005580781 kernel: loop: module loaded
Jan 10 11:56:12 np0005580781 kernel: loop3: detected capacity change from 0 to 41943040
Jan 10 11:56:12 np0005580781 python3[72324]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:12 np0005580781 lvm[72327]: PV /dev/loop3 not used.
Jan 10 11:56:12 np0005580781 lvm[72329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:56:12 np0005580781 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 10 11:56:12 np0005580781 lvm[72339]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:56:12 np0005580781 lvm[72339]: VG ceph_vg0 finished
Jan 10 11:56:12 np0005580781 lvm[72337]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 10 11:56:12 np0005580781 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 10 11:56:13 np0005580781 python3[72417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:56:13 np0005580781 python3[72490]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064172.9395354-36189-63096646957512/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:56:14 np0005580781 python3[72540]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:56:14 np0005580781 systemd[1]: Reloading.
Jan 10 11:56:14 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:56:14 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:56:14 np0005580781 systemd[1]: Starting Ceph OSD losetup...
Jan 10 11:56:14 np0005580781 bash[72579]: /dev/loop3: [64513]:4348699 (/var/lib/ceph-osd-0.img)
Jan 10 11:56:14 np0005580781 systemd[1]: Finished Ceph OSD losetup.
Jan 10 11:56:14 np0005580781 lvm[72580]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:56:14 np0005580781 lvm[72580]: VG ceph_vg0 finished
Jan 10 11:56:15 np0005580781 python3[72606]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 11:56:17 np0005580781 python3[72633]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:56:17 np0005580781 python3[72659]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:17 np0005580781 kernel: loop4: detected capacity change from 0 to 41943040
Jan 10 11:56:17 np0005580781 python3[72691]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:17 np0005580781 lvm[72694]: PV /dev/loop4 not used.
Jan 10 11:56:17 np0005580781 lvm[72696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:56:17 np0005580781 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 10 11:56:17 np0005580781 lvm[72702]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 10 11:56:17 np0005580781 lvm[72707]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:56:17 np0005580781 lvm[72707]: VG ceph_vg1 finished
Jan 10 11:56:17 np0005580781 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 10 11:56:18 np0005580781 python3[72785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:56:18 np0005580781 python3[72858]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064178.1484025-36218-113938940877657/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:56:19 np0005580781 chronyd[58619]: Selected source 142.4.192.253 (pool.ntp.org)
Jan 10 11:56:19 np0005580781 python3[72908]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:56:19 np0005580781 systemd[1]: Reloading.
Jan 10 11:56:19 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:56:19 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:56:19 np0005580781 systemd[1]: Starting Ceph OSD losetup...
Jan 10 11:56:19 np0005580781 bash[72948]: /dev/loop4: [64513]:4348710 (/var/lib/ceph-osd-1.img)
Jan 10 11:56:19 np0005580781 systemd[1]: Finished Ceph OSD losetup.
Jan 10 11:56:19 np0005580781 lvm[72949]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:56:19 np0005580781 lvm[72949]: VG ceph_vg1 finished
Jan 10 11:56:20 np0005580781 python3[72975]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 11:56:21 np0005580781 python3[73002]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:56:22 np0005580781 python3[73028]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:22 np0005580781 kernel: loop5: detected capacity change from 0 to 41943040
Jan 10 11:56:22 np0005580781 python3[73060]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:22 np0005580781 lvm[73063]: PV /dev/loop5 not used.
Jan 10 11:56:22 np0005580781 lvm[73065]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:56:22 np0005580781 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 10 11:56:22 np0005580781 lvm[73076]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:56:22 np0005580781 lvm[73076]: VG ceph_vg2 finished
Jan 10 11:56:22 np0005580781 lvm[73074]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 10 11:56:22 np0005580781 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 10 11:56:23 np0005580781 python3[73154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:56:23 np0005580781 python3[73227]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064182.9052546-36245-239685696138934/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:56:24 np0005580781 python3[73277]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 11:56:24 np0005580781 systemd[1]: Reloading.
Jan 10 11:56:24 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:56:24 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:56:24 np0005580781 systemd[1]: Starting Ceph OSD losetup...
Jan 10 11:56:24 np0005580781 bash[73316]: /dev/loop5: [64513]:4348783 (/var/lib/ceph-osd-2.img)
Jan 10 11:56:24 np0005580781 systemd[1]: Finished Ceph OSD losetup.
Jan 10 11:56:24 np0005580781 lvm[73317]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:56:24 np0005580781 lvm[73317]: VG ceph_vg2 finished
Jan 10 11:56:26 np0005580781 python3[73341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:56:28 np0005580781 python3[73434]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 11:56:31 np0005580781 python3[73491]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 10 11:56:34 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 11:56:34 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 11:56:35 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 11:56:35 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 11:56:35 np0005580781 systemd[1]: run-r4b63edc7b24945c2b06fa4660b50dc25.service: Deactivated successfully.
Jan 10 11:56:35 np0005580781 python3[73611]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:56:35 np0005580781 python3[73639]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:36 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:56:36 np0005580781 python3[73679]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:56:37 np0005580781 python3[73705]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:56:37 np0005580781 python3[73783]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:56:38 np0005580781 python3[73856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064197.579233-36393-169975549169158/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:56:39 np0005580781 python3[73958]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:56:39 np0005580781 python3[74031]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064198.7679796-36411-175565344358162/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:56:39 np0005580781 python3[74081]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:56:40 np0005580781 python3[74109]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:56:40 np0005580781 python3[74137]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:56:40 np0005580781 python3[74165]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:56:41 np0005580781 systemd-logind[798]: New session 19 of user ceph-admin.
Jan 10 11:56:41 np0005580781 systemd[1]: Created slice User Slice of UID 42477.
Jan 10 11:56:41 np0005580781 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 10 11:56:41 np0005580781 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 10 11:56:41 np0005580781 systemd[1]: Starting User Manager for UID 42477...
Jan 10 11:56:41 np0005580781 systemd[74173]: Queued start job for default target Main User Target.
Jan 10 11:56:41 np0005580781 systemd[74173]: Created slice User Application Slice.
Jan 10 11:56:41 np0005580781 systemd[74173]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 10 11:56:41 np0005580781 systemd[74173]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 11:56:41 np0005580781 systemd[74173]: Reached target Paths.
Jan 10 11:56:41 np0005580781 systemd[74173]: Reached target Timers.
Jan 10 11:56:41 np0005580781 systemd[74173]: Starting D-Bus User Message Bus Socket...
Jan 10 11:56:41 np0005580781 systemd[74173]: Starting Create User's Volatile Files and Directories...
Jan 10 11:56:41 np0005580781 systemd[74173]: Finished Create User's Volatile Files and Directories.
Jan 10 11:56:41 np0005580781 systemd[74173]: Listening on D-Bus User Message Bus Socket.
Jan 10 11:56:41 np0005580781 systemd[74173]: Reached target Sockets.
Jan 10 11:56:41 np0005580781 systemd[74173]: Reached target Basic System.
Jan 10 11:56:41 np0005580781 systemd[1]: Started User Manager for UID 42477.
Jan 10 11:56:41 np0005580781 systemd[74173]: Reached target Main User Target.
Jan 10 11:56:41 np0005580781 systemd[74173]: Startup finished in 143ms.
Jan 10 11:56:41 np0005580781 systemd[1]: Started Session 19 of User ceph-admin.
Jan 10 11:56:41 np0005580781 systemd[1]: session-19.scope: Deactivated successfully.
Jan 10 11:56:41 np0005580781 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Jan 10 11:56:41 np0005580781 systemd-logind[798]: Removed session 19.
Jan 10 11:56:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:56:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:56:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-compat2903914786-lower\x2dmapped.mount: Deactivated successfully.
Jan 10 11:56:51 np0005580781 systemd[1]: Stopping User Manager for UID 42477...
Jan 10 11:56:51 np0005580781 systemd[74173]: Activating special unit Exit the Session...
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped target Main User Target.
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped target Basic System.
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped target Paths.
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped target Sockets.
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped target Timers.
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 10 11:56:51 np0005580781 systemd[74173]: Closed D-Bus User Message Bus Socket.
Jan 10 11:56:51 np0005580781 systemd[74173]: Stopped Create User's Volatile Files and Directories.
Jan 10 11:56:51 np0005580781 systemd[74173]: Removed slice User Application Slice.
Jan 10 11:56:51 np0005580781 systemd[74173]: Reached target Shutdown.
Jan 10 11:56:51 np0005580781 systemd[74173]: Finished Exit the Session.
Jan 10 11:56:51 np0005580781 systemd[74173]: Reached target Exit the Session.
Jan 10 11:56:51 np0005580781 systemd[1]: user@42477.service: Deactivated successfully.
Jan 10 11:56:51 np0005580781 systemd[1]: Stopped User Manager for UID 42477.
Jan 10 11:56:51 np0005580781 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 10 11:56:51 np0005580781 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 10 11:56:51 np0005580781 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 10 11:56:51 np0005580781 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 10 11:56:51 np0005580781 systemd[1]: Removed slice User Slice of UID 42477.
Jan 10 11:57:11 np0005580781 podman[74267]: 2026-01-10 16:57:11.859651117 +0000 UTC m=+30.117093316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:11 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74379]: 2026-01-10 16:57:11.926566953 +0000 UTC m=+0.033486680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:12 np0005580781 podman[74379]: 2026-01-10 16:57:12.131291583 +0000 UTC m=+0.238211310 container create e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 11:57:12 np0005580781 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 10 11:57:12 np0005580781 systemd[1]: Started libpod-conmon-e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3.scope.
Jan 10 11:57:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:12 np0005580781 podman[74379]: 2026-01-10 16:57:12.275257072 +0000 UTC m=+0.382176869 container init e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 11:57:12 np0005580781 podman[74379]: 2026-01-10 16:57:12.285691797 +0000 UTC m=+0.392611524 container start e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 11:57:12 np0005580781 podman[74379]: 2026-01-10 16:57:12.289342231 +0000 UTC m=+0.396261988 container attach e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:12 np0005580781 sharp_shirley[74395]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 10 11:57:12 np0005580781 systemd[1]: libpod-e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3.scope: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74379]: 2026-01-10 16:57:12.400593843 +0000 UTC m=+0.507513550 container died e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:12 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ea7eb89fcda6dc3265814bb505d59de5d40338a2ed4b3d91da45e3b3d1f136f5-merged.mount: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74379]: 2026-01-10 16:57:12.442529011 +0000 UTC m=+0.549448718 container remove e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3 (image=quay.io/ceph/ceph:v20, name=sharp_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 11:57:12 np0005580781 systemd[1]: libpod-conmon-e802adbb93cb267bd67473dde14e2df87ec3cec800c3718b454ce4fff9b3cdb3.scope: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74411]: 2026-01-10 16:57:12.515090626 +0000 UTC m=+0.047247859 container create b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 11:57:12 np0005580781 systemd[1]: Started libpod-conmon-b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4.scope.
Jan 10 11:57:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:12 np0005580781 podman[74411]: 2026-01-10 16:57:12.584292867 +0000 UTC m=+0.116450110 container init b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 10 11:57:12 np0005580781 podman[74411]: 2026-01-10 16:57:12.591853581 +0000 UTC m=+0.124010814 container start b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:12 np0005580781 podman[74411]: 2026-01-10 16:57:12.496669264 +0000 UTC m=+0.028826507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:12 np0005580781 podman[74411]: 2026-01-10 16:57:12.595451843 +0000 UTC m=+0.127609066 container attach b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:12 np0005580781 lucid_panini[74427]: 167 167
Jan 10 11:57:12 np0005580781 systemd[1]: libpod-b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4.scope: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74411]: 2026-01-10 16:57:12.597142011 +0000 UTC m=+0.129299234 container died b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:12 np0005580781 podman[74411]: 2026-01-10 16:57:12.637219286 +0000 UTC m=+0.169376509 container remove b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4 (image=quay.io/ceph/ceph:v20, name=lucid_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:12 np0005580781 systemd[1]: libpod-conmon-b72d23be6a458b237322b042eea08a758dd09e4607d74733b011e58535d3e8e4.scope: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74447]: 2026-01-10 16:57:12.70266735 +0000 UTC m=+0.044620275 container create c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 10 11:57:12 np0005580781 systemd[1]: Started libpod-conmon-c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035.scope.
Jan 10 11:57:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:12 np0005580781 podman[74447]: 2026-01-10 16:57:12.765050008 +0000 UTC m=+0.107002973 container init c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:12 np0005580781 podman[74447]: 2026-01-10 16:57:12.77288377 +0000 UTC m=+0.114836705 container start c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:12 np0005580781 podman[74447]: 2026-01-10 16:57:12.681357207 +0000 UTC m=+0.023310212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:12 np0005580781 podman[74447]: 2026-01-10 16:57:12.776283936 +0000 UTC m=+0.118236891 container attach c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 11:57:12 np0005580781 sharp_sanderson[74463]: AQDohGJp80dVMBAAeHZKHzzJeJb07qXvmjPA9w==
Jan 10 11:57:12 np0005580781 systemd[1]: libpod-c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035.scope: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74447]: 2026-01-10 16:57:12.815125207 +0000 UTC m=+0.157078162 container died c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 11:57:12 np0005580781 podman[74447]: 2026-01-10 16:57:12.853573216 +0000 UTC m=+0.195526151 container remove c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035 (image=quay.io/ceph/ceph:v20, name=sharp_sanderson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:12 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:12 np0005580781 systemd[1]: libpod-conmon-c9de09510d04b42b100ff032a92b43e29d0b6f4404dadc7f90185ad77a0f9035.scope: Deactivated successfully.
Jan 10 11:57:12 np0005580781 podman[74483]: 2026-01-10 16:57:12.959495137 +0000 UTC m=+0.071701403 container create c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:12 np0005580781 systemd[1]: Started libpod-conmon-c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5.scope.
Jan 10 11:57:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:13 np0005580781 podman[74483]: 2026-01-10 16:57:13.02457037 +0000 UTC m=+0.136776666 container init c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:13 np0005580781 podman[74483]: 2026-01-10 16:57:12.930108464 +0000 UTC m=+0.042314810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:13 np0005580781 podman[74483]: 2026-01-10 16:57:13.031480506 +0000 UTC m=+0.143686762 container start c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 11:57:13 np0005580781 podman[74483]: 2026-01-10 16:57:13.035220302 +0000 UTC m=+0.147426598 container attach c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 11:57:13 np0005580781 gallant_herschel[74499]: AQDphGJpi72pAxAAsj9DdzBXQVI15FyQV5JZkg==
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5.scope: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74483]: 2026-01-10 16:57:13.065748307 +0000 UTC m=+0.177954563 container died c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ccef3c0ee65cfd73bbb86db1e59557317f390da9e22aaa8eebce1232127c8a59-merged.mount: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74483]: 2026-01-10 16:57:13.109101975 +0000 UTC m=+0.221308271 container remove c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5 (image=quay.io/ceph/ceph:v20, name=gallant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-conmon-c70089789ba3cc7f961fbcba7116217b7e3a91f962ae5eabd9274e58fba79fa5.scope: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74517]: 2026-01-10 16:57:13.191808978 +0000 UTC m=+0.057184821 container create b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 11:57:13 np0005580781 systemd[1]: Started libpod-conmon-b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f.scope.
Jan 10 11:57:13 np0005580781 podman[74517]: 2026-01-10 16:57:13.166065599 +0000 UTC m=+0.031441512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:13 np0005580781 podman[74517]: 2026-01-10 16:57:13.274922453 +0000 UTC m=+0.140298436 container init b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:13 np0005580781 podman[74517]: 2026-01-10 16:57:13.280686316 +0000 UTC m=+0.146062209 container start b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 11:57:13 np0005580781 podman[74517]: 2026-01-10 16:57:13.284474904 +0000 UTC m=+0.149850777 container attach b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:13 np0005580781 elegant_galileo[74534]: AQDphGJpbr0kEhAAJ0IYbjf4Qbpg+ZFhEbWXjA==
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f.scope: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74517]: 2026-01-10 16:57:13.308453163 +0000 UTC m=+0.173829016 container died b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:13 np0005580781 podman[74517]: 2026-01-10 16:57:13.348979031 +0000 UTC m=+0.214354874 container remove b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f (image=quay.io/ceph/ceph:v20, name=elegant_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-conmon-b73823b28b88e78befca3dd5f4a2fc558ef5dad76082d2e479f09abf1c25b78f.scope: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74553]: 2026-01-10 16:57:13.460246753 +0000 UTC m=+0.075952152 container create 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:13 np0005580781 systemd[1]: Started libpod-conmon-16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36.scope.
Jan 10 11:57:13 np0005580781 podman[74553]: 2026-01-10 16:57:13.431506849 +0000 UTC m=+0.047212298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d49bcf623d76ed34b2319c4723529e2a300102eb4332b3d9e68a4776ac86d9/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:13 np0005580781 podman[74553]: 2026-01-10 16:57:13.538362176 +0000 UTC m=+0.154067565 container init 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:13 np0005580781 podman[74553]: 2026-01-10 16:57:13.546333472 +0000 UTC m=+0.162038841 container start 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 11:57:13 np0005580781 podman[74553]: 2026-01-10 16:57:13.550464889 +0000 UTC m=+0.166170458 container attach 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 11:57:13 np0005580781 nervous_lewin[74569]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 10 11:57:13 np0005580781 nervous_lewin[74569]: setting min_mon_release = tentacle
Jan 10 11:57:13 np0005580781 nervous_lewin[74569]: /usr/bin/monmaptool: set fsid to a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:13 np0005580781 nervous_lewin[74569]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36.scope: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74553]: 2026-01-10 16:57:13.596854904 +0000 UTC m=+0.212560283 container died 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:13 np0005580781 podman[74553]: 2026-01-10 16:57:13.640416978 +0000 UTC m=+0.256122357 container remove 16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36 (image=quay.io/ceph/ceph:v20, name=nervous_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-conmon-16b41617cfeecc3df36c40749dae1995380a7d14defd1dbe09c3d200eb211a36.scope: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74587]: 2026-01-10 16:57:13.703041642 +0000 UTC m=+0.039891951 container create 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:13 np0005580781 systemd[1]: Started libpod-conmon-61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16.scope.
Jan 10 11:57:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:13 np0005580781 podman[74587]: 2026-01-10 16:57:13.774310661 +0000 UTC m=+0.111160980 container init 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:13 np0005580781 podman[74587]: 2026-01-10 16:57:13.685953978 +0000 UTC m=+0.022804297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:13 np0005580781 podman[74587]: 2026-01-10 16:57:13.782638957 +0000 UTC m=+0.119489256 container start 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:13 np0005580781 podman[74587]: 2026-01-10 16:57:13.786044344 +0000 UTC m=+0.122894673 container attach 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16.scope: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74587]: 2026-01-10 16:57:13.895657179 +0000 UTC m=+0.232507478 container died 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 11:57:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3b60aeee7fbf04c75393da24ad7f3060d5f037fbeb15e4b152749232d7c3a8a7-merged.mount: Deactivated successfully.
Jan 10 11:57:13 np0005580781 podman[74587]: 2026-01-10 16:57:13.934463658 +0000 UTC m=+0.271313957 container remove 61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16 (image=quay.io/ceph/ceph:v20, name=quirky_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 11:57:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:13 np0005580781 systemd[1]: libpod-conmon-61a37880fdd8209f914bd8056e9d679c097f9d94f91ad86a1ec79c3c8aff2e16.scope: Deactivated successfully.
Jan 10 11:57:14 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:14 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:14 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:14 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:14 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:14 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:14 np0005580781 systemd[1]: Reached target All Ceph clusters and services.
Jan 10 11:57:14 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:14 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:14 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:14 np0005580781 systemd[1]: Reached target Ceph cluster a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:57:14 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:14 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:14 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:15 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:15 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:15 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:15 np0005580781 systemd[1]: Created slice Slice /system/ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:57:15 np0005580781 systemd[1]: Reached target System Time Set.
Jan 10 11:57:15 np0005580781 systemd[1]: Reached target System Time Synchronized.
Jan 10 11:57:15 np0005580781 systemd[1]: Starting Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:57:15 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:15 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:15 np0005580781 podman[74880]: 2026-01-10 16:57:15.605490079 +0000 UTC m=+0.043597156 container create fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 11:57:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:15 np0005580781 podman[74880]: 2026-01-10 16:57:15.678091106 +0000 UTC m=+0.116198163 container init fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 11:57:15 np0005580781 podman[74880]: 2026-01-10 16:57:15.684074796 +0000 UTC m=+0.122181833 container start fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:15 np0005580781 podman[74880]: 2026-01-10 16:57:15.588476657 +0000 UTC m=+0.026583714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:15 np0005580781 bash[74880]: fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8
Jan 10 11:57:15 np0005580781 systemd[1]: Started Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: pidfile_write: ignore empty --pid-file
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: load: jerasure load: lrc 
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Git sha 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: DB SUMMARY
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: DB Session ID:  CJOGPED9GW0POJY2FBQK
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                                     Options.env: 0x55984a925440
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                                Options.info_log: 0x55984cb7b3e0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                                 Options.wal_dir: 
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                    Options.write_buffer_manager: 0x55984cafa140
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                               Options.row_cache: None
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                              Options.wal_filter: None
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.wal_compression: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.max_background_jobs: 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.max_total_wal_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:       Options.compaction_readahead_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Compression algorithms supported:
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kZSTD supported: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kXpressCompression supported: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kBZip2Compression supported: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kLZ4Compression supported: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kZlibCompression supported: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: #011kSnappyCompression supported: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:           Options.merge_operator: 
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:        Options.compaction_filter: None
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55984cb06600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55984caeb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:        Options.write_buffer_size: 33554432
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:  Options.max_write_buffer_number: 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:          Options.compression: NoCompression
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.num_levels: 7
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064235729896, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064235732603, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "CJOGPED9GW0POJY2FBQK", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064235732756, "job": 1, "event": "recovery_finished"}
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55984cb18e00
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: DB pointer 0x55984cc64000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55984caeb8d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@-1(???) e0 preinit fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : last_changed 2026-01-10T16:57:13.592121+0000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : created 2026-01-10T16:57:13.592121+0000
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-10T16:57:13.831154Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Dec 29 08:24:22 UTC 2025,kernel_version=5.14.0-655.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).mds e1 new map
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-10T16:57:15:771836+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : fsmap 
Jan 10 11:57:15 np0005580781 podman[74901]: 2026-01-10 16:57:15.782514154 +0000 UTC m=+0.057174660 container create ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mkfs a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 10 11:57:15 np0005580781 ceph-mon[74900]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 11:57:15 np0005580781 systemd[1]: Started libpod-conmon-ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042.scope.
Jan 10 11:57:15 np0005580781 podman[74901]: 2026-01-10 16:57:15.758799813 +0000 UTC m=+0.033460379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:15 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f566ea5fc515376a682034bfbf98e7e00582f79273c577c5ab68c7e0c0023ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f566ea5fc515376a682034bfbf98e7e00582f79273c577c5ab68c7e0c0023ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f566ea5fc515376a682034bfbf98e7e00582f79273c577c5ab68c7e0c0023ff/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:15 np0005580781 podman[74901]: 2026-01-10 16:57:15.880459149 +0000 UTC m=+0.155119705 container init ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 11:57:15 np0005580781 podman[74901]: 2026-01-10 16:57:15.891624316 +0000 UTC m=+0.166284862 container start ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 11:57:15 np0005580781 podman[74901]: 2026-01-10 16:57:15.89565896 +0000 UTC m=+0.170319506 container attach ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885305628' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:  cluster:
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    id:     a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    health: HEALTH_OK
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]: 
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:  services:
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    mon: 1 daemons, quorum compute-0 (age 0.327597s) [leader: compute-0]
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    mgr: no daemons active
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    osd: 0 osds: 0 up, 0 in
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]: 
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:  data:
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    pools:   0 pools, 0 pgs
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    objects: 0 objects, 0 B
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    usage:   0 B used, 0 B / 0 B avail
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]:    pgs:     
Jan 10 11:57:16 np0005580781 hopeful_mahavira[74955]: 
Jan 10 11:57:16 np0005580781 systemd[1]: libpod-ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042.scope: Deactivated successfully.
Jan 10 11:57:16 np0005580781 podman[74901]: 2026-01-10 16:57:16.11486566 +0000 UTC m=+0.389526226 container died ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:16 np0005580781 podman[74901]: 2026-01-10 16:57:16.156783158 +0000 UTC m=+0.431443674 container remove ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:16 np0005580781 systemd[1]: libpod-conmon-ea6b8b56ef0e1e84f8798acc555fc0716469d5437b1b1fc1fe55de955e48c042.scope: Deactivated successfully.
Jan 10 11:57:16 np0005580781 podman[74991]: 2026-01-10 16:57:16.224680981 +0000 UTC m=+0.041751213 container create a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 10 11:57:16 np0005580781 systemd[1]: Started libpod-conmon-a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff.scope.
Jan 10 11:57:16 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 podman[74991]: 2026-01-10 16:57:16.299680816 +0000 UTC m=+0.116751138 container init a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 11:57:16 np0005580781 podman[74991]: 2026-01-10 16:57:16.210665824 +0000 UTC m=+0.027736076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:16 np0005580781 podman[74991]: 2026-01-10 16:57:16.308411874 +0000 UTC m=+0.125482106 container start a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 11:57:16 np0005580781 podman[74991]: 2026-01-10 16:57:16.31147012 +0000 UTC m=+0.128540352 container attach a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 10 11:57:16 np0005580781 sweet_hellman[75008]: 
Jan 10 11:57:16 np0005580781 sweet_hellman[75008]: [global]
Jan 10 11:57:16 np0005580781 sweet_hellman[75008]: #011fsid = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:16 np0005580781 sweet_hellman[75008]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 10 11:57:16 np0005580781 sweet_hellman[75008]: #011osd_crush_chooseleaf_type = 0
Jan 10 11:57:16 np0005580781 systemd[1]: libpod-a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff.scope: Deactivated successfully.
Jan 10 11:57:16 np0005580781 podman[74991]: 2026-01-10 16:57:16.549659528 +0000 UTC m=+0.366729790 container died a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:16 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a691c6c1886a783f431d104110c79e92b747c4deb543b636b479fda00bb37517-merged.mount: Deactivated successfully.
Jan 10 11:57:16 np0005580781 podman[74991]: 2026-01-10 16:57:16.60901877 +0000 UTC m=+0.426089012 container remove a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff (image=quay.io/ceph/ceph:v20, name=sweet_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 11:57:16 np0005580781 systemd[1]: libpod-conmon-a1a64b7b990e2722cc5942f6dd3313660acd2dc60a81fce09e8fbf606f99b5ff.scope: Deactivated successfully.
Jan 10 11:57:16 np0005580781 podman[75045]: 2026-01-10 16:57:16.661136847 +0000 UTC m=+0.033985064 container create f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:16 np0005580781 systemd[1]: Started libpod-conmon-f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1.scope.
Jan 10 11:57:16 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:16 np0005580781 podman[75045]: 2026-01-10 16:57:16.740404872 +0000 UTC m=+0.113253109 container init f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 11:57:16 np0005580781 podman[75045]: 2026-01-10 16:57:16.646167392 +0000 UTC m=+0.019015639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:16 np0005580781 podman[75045]: 2026-01-10 16:57:16.748931134 +0000 UTC m=+0.121779371 container start f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 11:57:16 np0005580781 podman[75045]: 2026-01-10 16:57:16.752551426 +0000 UTC m=+0.125399673 container attach f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: from='client.? 192.168.122.100:0/3973554417' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:57:16 np0005580781 ceph-mon[74900]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015864717' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:57:17 np0005580781 systemd[1]: libpod-f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1.scope: Deactivated successfully.
Jan 10 11:57:17 np0005580781 podman[75045]: 2026-01-10 16:57:17.008833367 +0000 UTC m=+0.381681594 container died f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 11:57:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay-fd9e44e0c93417860a8e16e8f86a3e3f03ea23f600caa6cb190507920d08e008-merged.mount: Deactivated successfully.
Jan 10 11:57:17 np0005580781 podman[75045]: 2026-01-10 16:57:17.059563584 +0000 UTC m=+0.432411811 container remove f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1 (image=quay.io/ceph/ceph:v20, name=sad_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:17 np0005580781 systemd[1]: libpod-conmon-f1956d13a97d4f28c354df51b36e6e6243c53c2c4331f6be0ae3cda9e86543c1.scope: Deactivated successfully.
Jan 10 11:57:17 np0005580781 systemd[1]: Stopping Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:57:17 np0005580781 ceph-mon[74900]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 10 11:57:17 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 10 11:57:17 np0005580781 ceph-mon[74900]: mon.compute-0@0(leader) e1 shutdown
Jan 10 11:57:17 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0[74896]: 2026-01-10T16:57:17.505+0000 7fe694f5f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 10 11:57:17 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0[74896]: 2026-01-10T16:57:17.505+0000 7fe694f5f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 10 11:57:17 np0005580781 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 11:57:17 np0005580781 ceph-mon[74900]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 11:57:17 np0005580781 podman[75129]: 2026-01-10 16:57:17.660592811 +0000 UTC m=+0.460833356 container died fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay-73b51d649077bbebaa9dcafc0ac0cfb2a3594384a6a1f53a28703540f70ab88f-merged.mount: Deactivated successfully.
Jan 10 11:57:17 np0005580781 podman[75129]: 2026-01-10 16:57:17.745795235 +0000 UTC m=+0.546035780 container remove fc0dc41683eedbc6201d5d514ea031b915d9d764138e295128ca72ee12d667a8 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 10 11:57:17 np0005580781 bash[75129]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0
Jan 10 11:57:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 10 11:57:17 np0005580781 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mon.compute-0.service: Deactivated successfully.
Jan 10 11:57:17 np0005580781 systemd[1]: Stopped Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:57:17 np0005580781 systemd[1]: Starting Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:57:18 np0005580781 podman[75230]: 2026-01-10 16:57:18.114317465 +0000 UTC m=+0.047463115 container create 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4312961fc0b9ccac8b0042cd1f5dad4c56fac1f78ca131cba8ae3afdfcf03fd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 podman[75230]: 2026-01-10 16:57:18.091754526 +0000 UTC m=+0.024900156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:18 np0005580781 podman[75230]: 2026-01-10 16:57:18.1981448 +0000 UTC m=+0.131290500 container init 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 11:57:18 np0005580781 podman[75230]: 2026-01-10 16:57:18.208206465 +0000 UTC m=+0.141352115 container start 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:18 np0005580781 bash[75230]: 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7
Jan 10 11:57:18 np0005580781 systemd[1]: Started Ceph mon.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: pidfile_write: ignore empty --pid-file
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: load: jerasure load: lrc 
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Git sha 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: DB SUMMARY
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: DB Session ID:  VPFJD76VNV79HUMFHEYZ
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                                     Options.env: 0x55efa203d440
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                                Options.info_log: 0x55efa2bb3e80
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                                 Options.wal_dir: 
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                    Options.write_buffer_manager: 0x55efa2bfe140
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                               Options.row_cache: None
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                              Options.wal_filter: None
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.wal_compression: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.max_background_jobs: 2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.max_total_wal_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:       Options.compaction_readahead_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Compression algorithms supported:
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kZSTD supported: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kXpressCompression supported: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kBZip2Compression supported: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kLZ4Compression supported: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kZlibCompression supported: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: 	kSnappyCompression supported: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:           Options.merge_operator: 
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:        Options.compaction_filter: None
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55efa2c0aa00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55efa2bef8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:        Options.write_buffer_size: 33554432
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:  Options.max_write_buffer_number: 2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:          Options.compression: NoCompression
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.num_levels: 7
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064238277182, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064238282034, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064238, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064238282376, "job": 1, "event": "recovery_finished"}
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55efa2c1ce00
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: DB pointer 0x55efa2d66000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55efa2bef8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???) e1 preinit fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???).mds e1 new map
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-10T16:57:15:771836+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : last_changed 2026-01-10T16:57:13.592121+0000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : created 2026-01-10T16:57:13.592121+0000
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap 
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 10 11:57:18 np0005580781 podman[75250]: 2026-01-10 16:57:18.324876211 +0000 UTC m=+0.067427262 container create df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 11:57:18 np0005580781 systemd[1]: Started libpod-conmon-df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11.scope.
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 10 11:57:18 np0005580781 podman[75250]: 2026-01-10 16:57:18.297369681 +0000 UTC m=+0.039920802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:18 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 podman[75250]: 2026-01-10 16:57:18.430298517 +0000 UTC m=+0.172849588 container init df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 11:57:18 np0005580781 podman[75250]: 2026-01-10 16:57:18.438590742 +0000 UTC m=+0.181141773 container start df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 11:57:18 np0005580781 podman[75250]: 2026-01-10 16:57:18.442560815 +0000 UTC m=+0.185111866 container attach df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 10 11:57:18 np0005580781 systemd[1]: libpod-df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11.scope: Deactivated successfully.
Jan 10 11:57:18 np0005580781 podman[75250]: 2026-01-10 16:57:18.67499733 +0000 UTC m=+0.417548471 container died df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:18 np0005580781 systemd[1]: var-lib-containers-storage-overlay-57afa50e227d5ef495bbd77f26302504f770f0fff0d7ea7255dbb17b5db74a5f-merged.mount: Deactivated successfully.
Jan 10 11:57:18 np0005580781 podman[75250]: 2026-01-10 16:57:18.718973516 +0000 UTC m=+0.461524557 container remove df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11 (image=quay.io/ceph/ceph:v20, name=naughty_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:18 np0005580781 systemd[1]: libpod-conmon-df6846cd9bd69b81b14b0678969522465c4e2743faea850b40fe4dc305ee3b11.scope: Deactivated successfully.
Jan 10 11:57:18 np0005580781 podman[75338]: 2026-01-10 16:57:18.796235905 +0000 UTC m=+0.048011921 container create 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 11:57:18 np0005580781 systemd[1]: Started libpod-conmon-16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3.scope.
Jan 10 11:57:18 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:18 np0005580781 podman[75338]: 2026-01-10 16:57:18.775384034 +0000 UTC m=+0.027160060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:18 np0005580781 podman[75338]: 2026-01-10 16:57:18.892289976 +0000 UTC m=+0.144066022 container init 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:18 np0005580781 podman[75338]: 2026-01-10 16:57:18.897223856 +0000 UTC m=+0.148999842 container start 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 11:57:18 np0005580781 podman[75338]: 2026-01-10 16:57:18.900729885 +0000 UTC m=+0.152505891 container attach 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 10 11:57:19 np0005580781 systemd[1]: libpod-16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3.scope: Deactivated successfully.
Jan 10 11:57:19 np0005580781 podman[75338]: 2026-01-10 16:57:19.158457917 +0000 UTC m=+0.410233933 container died 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:19 np0005580781 systemd[1]: var-lib-containers-storage-overlay-40e189d81349384d2f06289f51c29c8456e6a5c0ede33183dda9acb43205afc3-merged.mount: Deactivated successfully.
Jan 10 11:57:19 np0005580781 podman[75338]: 2026-01-10 16:57:19.21258227 +0000 UTC m=+0.464358296 container remove 16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3 (image=quay.io/ceph/ceph:v20, name=musing_liskov, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 11:57:19 np0005580781 systemd[1]: libpod-conmon-16041761f56f6676f0d0a05c95bc07b6f5d1b31f474fef0156a8d661e8ba38e3.scope: Deactivated successfully.
Jan 10 11:57:19 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:19 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:19 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:19 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:19 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:19 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:19 np0005580781 systemd[1]: Starting Ceph mgr.compute-0.mkxlpr for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:57:20 np0005580781 podman[75519]: 2026-01-10 16:57:20.03605952 +0000 UTC m=+0.064133838 container create 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 11:57:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70cdc44d3161d71786d28e126207e74f502112ea31f741261facd75befa6011/merged/var/lib/ceph/mgr/ceph-compute-0.mkxlpr supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:20 np0005580781 podman[75519]: 2026-01-10 16:57:20.098262532 +0000 UTC m=+0.126336860 container init 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:20 np0005580781 podman[75519]: 2026-01-10 16:57:20.012176193 +0000 UTC m=+0.040250591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:20 np0005580781 podman[75519]: 2026-01-10 16:57:20.110932811 +0000 UTC m=+0.139007129 container start 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:20 np0005580781 bash[75519]: 1966a4894cf3ff35a13e7374e82388f167dfb75f8d810989bdba971104607200
Jan 10 11:57:20 np0005580781 systemd[1]: Started Ceph mgr.compute-0.mkxlpr for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:57:20 np0005580781 ceph-mgr[75538]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:57:20 np0005580781 ceph-mgr[75538]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 10 11:57:20 np0005580781 ceph-mgr[75538]: pidfile_write: ignore empty --pid-file
Jan 10 11:57:20 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'alerts'
Jan 10 11:57:20 np0005580781 podman[75539]: 2026-01-10 16:57:20.202681361 +0000 UTC m=+0.046914581 container create 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:20 np0005580781 systemd[1]: Started libpod-conmon-54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222.scope.
Jan 10 11:57:20 np0005580781 podman[75539]: 2026-01-10 16:57:20.182416806 +0000 UTC m=+0.026650076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:20 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:20 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'balancer'
Jan 10 11:57:20 np0005580781 podman[75539]: 2026-01-10 16:57:20.306351648 +0000 UTC m=+0.150584978 container init 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 11:57:20 np0005580781 podman[75539]: 2026-01-10 16:57:20.31490107 +0000 UTC m=+0.159134300 container start 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 11:57:20 np0005580781 podman[75539]: 2026-01-10 16:57:20.322766823 +0000 UTC m=+0.167000143 container attach 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:57:20 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'cephadm'
Jan 10 11:57:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 11:57:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179129347' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]: 
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]: {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "health": {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "status": "HEALTH_OK",
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "checks": {},
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "mutes": []
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    },
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "election_epoch": 5,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "quorum": [
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        0
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    ],
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "quorum_names": [
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "compute-0"
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    ],
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "quorum_age": 2,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "monmap": {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "epoch": 1,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "min_mon_release_name": "tentacle",
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_mons": 1
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    },
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "osdmap": {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "epoch": 1,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_osds": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_up_osds": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "osd_up_since": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_in_osds": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "osd_in_since": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_remapped_pgs": 0
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    },
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "pgmap": {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "pgs_by_state": [],
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_pgs": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_pools": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_objects": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "data_bytes": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "bytes_used": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "bytes_avail": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "bytes_total": 0
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    },
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "fsmap": {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "epoch": 1,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "btime": "2026-01-10T16:57:15.771836+0000",
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "by_rank": [],
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "up:standby": 0
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    },
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "mgrmap": {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "available": false,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "num_standbys": 0,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "modules": [
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:            "iostat",
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:            "nfs"
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        ],
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "services": {}
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    },
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "servicemap": {
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "epoch": 1,
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:        "services": {}
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    },
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]:    "progress_events": {}
Jan 10 11:57:20 np0005580781 gifted_ritchie[75576]: }
Jan 10 11:57:20 np0005580781 systemd[1]: libpod-54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222.scope: Deactivated successfully.
Jan 10 11:57:20 np0005580781 podman[75602]: 2026-01-10 16:57:20.634751592 +0000 UTC m=+0.031207356 container died 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 11:57:20 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0a13c99ee4b2ad3300a2e14e4d32cdceccb32081b5881450619e673a81a2166d-merged.mount: Deactivated successfully.
Jan 10 11:57:20 np0005580781 podman[75602]: 2026-01-10 16:57:20.682992588 +0000 UTC m=+0.079448332 container remove 54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222 (image=quay.io/ceph/ceph:v20, name=gifted_ritchie, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:57:20 np0005580781 systemd[1]: libpod-conmon-54ef4c0c6d7d71730ab5aefd8536741cc7fc5fe76d4860d21aebed65a7e37222.scope: Deactivated successfully.
Jan 10 11:57:21 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'crash'
Jan 10 11:57:21 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'dashboard'
Jan 10 11:57:21 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'devicehealth'
Jan 10 11:57:22 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'diskprediction_local'
Jan 10 11:57:22 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 10 11:57:22 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 10 11:57:22 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]:  from numpy import show_config as show_numpy_config
Jan 10 11:57:22 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'influx'
Jan 10 11:57:22 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'insights'
Jan 10 11:57:22 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'iostat'
Jan 10 11:57:22 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'k8sevents'
Jan 10 11:57:22 np0005580781 podman[75629]: 2026-01-10 16:57:22.816658466 +0000 UTC m=+0.095015743 container create a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 11:57:22 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'localpool'
Jan 10 11:57:22 np0005580781 podman[75629]: 2026-01-10 16:57:22.765093295 +0000 UTC m=+0.043450552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:22 np0005580781 systemd[1]: Started libpod-conmon-a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7.scope.
Jan 10 11:57:22 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:22 np0005580781 podman[75629]: 2026-01-10 16:57:22.903255259 +0000 UTC m=+0.181612506 container init a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 11:57:22 np0005580781 podman[75629]: 2026-01-10 16:57:22.909687752 +0000 UTC m=+0.188044989 container start a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:22 np0005580781 podman[75629]: 2026-01-10 16:57:22.916968168 +0000 UTC m=+0.195325415 container attach a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:22 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'mds_autoscaler'
Jan 10 11:57:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 11:57:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499614654' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]: 
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]: {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "health": {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "status": "HEALTH_OK",
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "checks": {},
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "mutes": []
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    },
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "election_epoch": 5,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "quorum": [
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        0
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    ],
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "quorum_names": [
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "compute-0"
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    ],
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "quorum_age": 4,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "monmap": {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "epoch": 1,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "min_mon_release_name": "tentacle",
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_mons": 1
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    },
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "osdmap": {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "epoch": 1,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_osds": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_up_osds": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "osd_up_since": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_in_osds": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "osd_in_since": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_remapped_pgs": 0
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    },
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "pgmap": {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "pgs_by_state": [],
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_pgs": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_pools": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_objects": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "data_bytes": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "bytes_used": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "bytes_avail": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "bytes_total": 0
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    },
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "fsmap": {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "epoch": 1,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "btime": "2026-01-10T16:57:15.771836+0000",
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "by_rank": [],
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "up:standby": 0
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    },
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "mgrmap": {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "available": false,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "num_standbys": 0,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "modules": [
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:            "iostat",
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:            "nfs"
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        ],
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "services": {}
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    },
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "servicemap": {
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "epoch": 1,
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:        "services": {}
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    },
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]:    "progress_events": {}
Jan 10 11:57:23 np0005580781 vigilant_easley[75646]: }
Jan 10 11:57:23 np0005580781 systemd[1]: libpod-a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7.scope: Deactivated successfully.
Jan 10 11:57:23 np0005580781 podman[75629]: 2026-01-10 16:57:23.131510126 +0000 UTC m=+0.409867363 container died a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:57:23 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'mirroring'
Jan 10 11:57:23 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4c075406721321bc320516b0a2da914497813233ae132dee63eb307e3b1b743c-merged.mount: Deactivated successfully.
Jan 10 11:57:23 np0005580781 podman[75629]: 2026-01-10 16:57:23.241750239 +0000 UTC m=+0.520107486 container remove a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7 (image=quay.io/ceph/ceph:v20, name=vigilant_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:23 np0005580781 systemd[1]: libpod-conmon-a46c3a2770c039323f9354285553d125d107ffe5ec2f122ca35344e8948f8bf7.scope: Deactivated successfully.
Jan 10 11:57:23 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'nfs'
Jan 10 11:57:23 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'orchestrator'
Jan 10 11:57:23 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'osd_perf_query'
Jan 10 11:57:23 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'osd_support'
Jan 10 11:57:23 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'pg_autoscaler'
Jan 10 11:57:24 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'progress'
Jan 10 11:57:24 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'prometheus'
Jan 10 11:57:24 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'rbd_support'
Jan 10 11:57:24 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'rgw'
Jan 10 11:57:24 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'rook'
Jan 10 11:57:25 np0005580781 podman[75683]: 2026-01-10 16:57:25.375400863 +0000 UTC m=+0.096274874 container create f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 11:57:25 np0005580781 systemd[1]: Started libpod-conmon-f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f.scope.
Jan 10 11:57:25 np0005580781 podman[75683]: 2026-01-10 16:57:25.329574904 +0000 UTC m=+0.050448975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:25 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:25 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'selftest'
Jan 10 11:57:25 np0005580781 podman[75683]: 2026-01-10 16:57:25.468553211 +0000 UTC m=+0.189427222 container init f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:25 np0005580781 podman[75683]: 2026-01-10 16:57:25.474091882 +0000 UTC m=+0.194965883 container start f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:25 np0005580781 podman[75683]: 2026-01-10 16:57:25.477926376 +0000 UTC m=+0.198800417 container attach f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:25 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'smb'
Jan 10 11:57:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 11:57:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317659530' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 11:57:25 np0005580781 competent_poitras[75699]: 
Jan 10 11:57:25 np0005580781 competent_poitras[75699]: {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "health": {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "status": "HEALTH_OK",
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "checks": {},
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "mutes": []
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    },
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "election_epoch": 5,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "quorum": [
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        0
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    ],
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "quorum_names": [
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "compute-0"
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    ],
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "quorum_age": 7,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "monmap": {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "epoch": 1,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "min_mon_release_name": "tentacle",
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_mons": 1
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    },
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "osdmap": {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "epoch": 1,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_osds": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_up_osds": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "osd_up_since": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_in_osds": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "osd_in_since": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_remapped_pgs": 0
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    },
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "pgmap": {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "pgs_by_state": [],
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_pgs": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_pools": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_objects": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "data_bytes": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "bytes_used": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "bytes_avail": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "bytes_total": 0
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    },
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "fsmap": {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "epoch": 1,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "btime": "2026-01-10T16:57:15.771836+0000",
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "by_rank": [],
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "up:standby": 0
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    },
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "mgrmap": {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "available": false,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "num_standbys": 0,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "modules": [
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:            "iostat",
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:            "nfs"
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        ],
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "services": {}
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    },
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "servicemap": {
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "epoch": 1,
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:        "services": {}
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    },
Jan 10 11:57:25 np0005580781 competent_poitras[75699]:    "progress_events": {}
Jan 10 11:57:25 np0005580781 competent_poitras[75699]: }
Jan 10 11:57:25 np0005580781 systemd[1]: libpod-f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f.scope: Deactivated successfully.
Jan 10 11:57:25 np0005580781 podman[75683]: 2026-01-10 16:57:25.720077304 +0000 UTC m=+0.440951295 container died f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 11:57:25 np0005580781 systemd[1]: var-lib-containers-storage-overlay-41e7c1d89eab50eb1ff0fa45ca52a353153d5e6ed45520349b657375258e9180-merged.mount: Deactivated successfully.
Jan 10 11:57:25 np0005580781 podman[75683]: 2026-01-10 16:57:25.756340592 +0000 UTC m=+0.477214583 container remove f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f (image=quay.io/ceph/ceph:v20, name=competent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 11:57:25 np0005580781 systemd[1]: libpod-conmon-f936378ba963b1b0d2b5a21d6043daa07169fe2c814365ce606f3a6095afec3f.scope: Deactivated successfully.
Jan 10 11:57:25 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'snap_schedule'
Jan 10 11:57:25 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'stats'
Jan 10 11:57:25 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'status'
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'telegraf'
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'telemetry'
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'test_orchestrator'
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'volumes'
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: ms_deliver_dispatch: unhandled message 0x55c8a2a2b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mkxlpr
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mkxlpr(active, starting, since 0.0100662s)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr handle_mgr_map Activating!
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr handle_mgr_map I am now activating
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata"} : dispatch
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e1 all = 1
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata"} : dispatch
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} : dispatch
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: balancer
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Manager daemon compute-0.mkxlpr is now available
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [balancer INFO root] Starting
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: crash
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: devicehealth
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:57:26
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [balancer INFO root] No pools available
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] Starting
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: iostat
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: nfs
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: orchestrator
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: pg_autoscaler
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: progress
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [progress INFO root] Loading...
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [progress INFO root] No stored events to load
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [progress INFO root] Loaded [] historic events
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [progress INFO root] Loaded OSDMap, ready.
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] recovery thread starting
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] starting setup
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: rbd_support
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: Activating manager daemon compute-0.mkxlpr
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: Manager daemon compute-0.mkxlpr is now available
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: status
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: telemetry
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] PerfHandler: starting
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TaskHandler: starting
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} v 0)
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 11:57:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 10 11:57:26 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] setup complete
Jan 10 11:57:27 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: volumes
Jan 10 11:57:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:27 np0005580781 podman[75815]: 2026-01-10 16:57:27.863755172 +0000 UTC m=+0.069134644 container create 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 11:57:27 np0005580781 systemd[1]: Started libpod-conmon-2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd.scope.
Jan 10 11:57:27 np0005580781 podman[75815]: 2026-01-10 16:57:27.834493215 +0000 UTC m=+0.039872747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:27 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:27 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mkxlpr(active, since 1.03538s)
Jan 10 11:57:27 np0005580781 podman[75815]: 2026-01-10 16:57:27.967424587 +0000 UTC m=+0.172804099 container init 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:27 np0005580781 podman[75815]: 2026-01-10 16:57:27.974036887 +0000 UTC m=+0.179416339 container start 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 11:57:27 np0005580781 podman[75815]: 2026-01-10 16:57:27.978513349 +0000 UTC m=+0.183892851 container attach 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 11:57:27 np0005580781 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 11:57:27 np0005580781 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:27 np0005580781 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:27 np0005580781 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 11:57:27 np0005580781 ceph-mon[75249]: from='mgr.14102 192.168.122.100:0/1838997223' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 10 11:57:28 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128525717' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 10 11:57:28 np0005580781 focused_joliot[75831]: 
Jan 10 11:57:28 np0005580781 focused_joliot[75831]: {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "health": {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "status": "HEALTH_OK",
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "checks": {},
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "mutes": []
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    },
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "election_epoch": 5,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "quorum": [
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        0
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    ],
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "quorum_names": [
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "compute-0"
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    ],
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "quorum_age": 10,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "monmap": {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "epoch": 1,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "min_mon_release_name": "tentacle",
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_mons": 1
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    },
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "osdmap": {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "epoch": 1,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_osds": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_up_osds": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "osd_up_since": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_in_osds": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "osd_in_since": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_remapped_pgs": 0
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    },
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "pgmap": {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "pgs_by_state": [],
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_pgs": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_pools": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_objects": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "data_bytes": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "bytes_used": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "bytes_avail": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "bytes_total": 0
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    },
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "fsmap": {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "epoch": 1,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "btime": "2026-01-10T16:57:15.771836+0000",
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "by_rank": [],
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "up:standby": 0
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    },
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "mgrmap": {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "available": true,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "num_standbys": 0,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "modules": [
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:            "iostat",
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:            "nfs"
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        ],
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "services": {}
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    },
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "servicemap": {
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "epoch": 1,
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "modified": "2026-01-10T16:57:15.774565+0000",
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:        "services": {}
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    },
Jan 10 11:57:28 np0005580781 focused_joliot[75831]:    "progress_events": {}
Jan 10 11:57:28 np0005580781 focused_joliot[75831]: }
Jan 10 11:57:28 np0005580781 systemd[1]: libpod-2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd.scope: Deactivated successfully.
Jan 10 11:57:28 np0005580781 podman[75815]: 2026-01-10 16:57:28.550647997 +0000 UTC m=+0.756027469 container died 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 11:57:28 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a8e570c08fd7c6d4d55d41af1137ce276a2dd87ffab3f7a8713a780524794474-merged.mount: Deactivated successfully.
Jan 10 11:57:28 np0005580781 podman[75815]: 2026-01-10 16:57:28.602936202 +0000 UTC m=+0.808315664 container remove 2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd (image=quay.io/ceph/ceph:v20, name=focused_joliot, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 11:57:28 np0005580781 systemd[1]: libpod-conmon-2de9c15466be09f366cb9de288e0a5f43945ceafcf1fb8af98059d47625827bd.scope: Deactivated successfully.
Jan 10 11:57:28 np0005580781 podman[75871]: 2026-01-10 16:57:28.708962571 +0000 UTC m=+0.068854138 container create 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:28 np0005580781 systemd[1]: Started libpod-conmon-1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3.scope.
Jan 10 11:57:28 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:28 np0005580781 podman[75871]: 2026-01-10 16:57:28.684752041 +0000 UTC m=+0.044643638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:28 np0005580781 podman[75871]: 2026-01-10 16:57:28.790285236 +0000 UTC m=+0.150176833 container init 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 11:57:28 np0005580781 podman[75871]: 2026-01-10 16:57:28.795462967 +0000 UTC m=+0.155354534 container start 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 11:57:28 np0005580781 podman[75871]: 2026-01-10 16:57:28.798228443 +0000 UTC m=+0.158120010 container attach 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:28 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:28 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:28 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mkxlpr(active, since 2s)
Jan 10 11:57:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 10 11:57:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862259952' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 11:57:29 np0005580781 great_jennings[75887]: 
Jan 10 11:57:29 np0005580781 great_jennings[75887]: [global]
Jan 10 11:57:29 np0005580781 great_jennings[75887]: 	fsid = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4
Jan 10 11:57:29 np0005580781 great_jennings[75887]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 10 11:57:29 np0005580781 great_jennings[75887]: 	osd_crush_chooseleaf_type = 0
Jan 10 11:57:29 np0005580781 systemd[1]: libpod-1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3.scope: Deactivated successfully.
Jan 10 11:57:29 np0005580781 podman[75871]: 2026-01-10 16:57:29.238983212 +0000 UTC m=+0.598874769 container died 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:29 np0005580781 systemd[1]: var-lib-containers-storage-overlay-80f083adee4015f60fc3bddeca3ab8af3b22edd5c02a370a57f624379b3111c4-merged.mount: Deactivated successfully.
Jan 10 11:57:29 np0005580781 podman[75871]: 2026-01-10 16:57:29.280898044 +0000 UTC m=+0.640789601 container remove 1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3 (image=quay.io/ceph/ceph:v20, name=great_jennings, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:29 np0005580781 systemd[1]: libpod-conmon-1f98089b44288f4004861b2e36fea92bcac2ff210460b873484e7d189f5374d3.scope: Deactivated successfully.
Jan 10 11:57:29 np0005580781 podman[75925]: 2026-01-10 16:57:29.354728426 +0000 UTC m=+0.051564996 container create 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:29 np0005580781 systemd[1]: Started libpod-conmon-897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732.scope.
Jan 10 11:57:29 np0005580781 podman[75925]: 2026-01-10 16:57:29.329637222 +0000 UTC m=+0.026473822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:29 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:29 np0005580781 podman[75925]: 2026-01-10 16:57:29.455559433 +0000 UTC m=+0.152396013 container init 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 11:57:29 np0005580781 podman[75925]: 2026-01-10 16:57:29.462140482 +0000 UTC m=+0.158977062 container start 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 11:57:29 np0005580781 podman[75925]: 2026-01-10 16:57:29.465671338 +0000 UTC m=+0.162507948 container attach 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 10 11:57:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 10 11:57:29 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3862259952' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 10 11:57:29 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 10 11:57:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  1: '-n'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  2: 'mgr.compute-0.mkxlpr'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  3: '-f'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  4: '--setuser'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  5: 'ceph'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  6: '--setgroup'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  7: 'ceph'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  8: '--default-log-to-file=false'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  9: '--default-log-to-journald=true'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr respawn  exe_path /proc/self/exe
Jan 10 11:57:30 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mkxlpr(active, since 3s)
Jan 10 11:57:30 np0005580781 systemd[1]: libpod-897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732.scope: Deactivated successfully.
Jan 10 11:57:30 np0005580781 podman[75925]: 2026-01-10 16:57:30.056552088 +0000 UTC m=+0.753388648 container died 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 11:57:30 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e05978f6a659ec31b69073fd580305c7928f44e01f9e21662cc71d2662006c3e-merged.mount: Deactivated successfully.
Jan 10 11:57:30 np0005580781 podman[75925]: 2026-01-10 16:57:30.130178424 +0000 UTC m=+0.827014984 container remove 897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732 (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 11:57:30 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: ignoring --setuser ceph since I am not root
Jan 10 11:57:30 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: ignoring --setgroup ceph since I am not root
Jan 10 11:57:30 np0005580781 systemd[1]: libpod-conmon-897851a6c41b24cadc2d60d25c14ebf44fcb881c951424af796f55a130727732.scope: Deactivated successfully.
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: pidfile_write: ignore empty --pid-file
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'alerts'
Jan 10 11:57:30 np0005580781 podman[75988]: 2026-01-10 16:57:30.202884555 +0000 UTC m=+0.049573252 container create 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 11:57:30 np0005580781 podman[75988]: 2026-01-10 16:57:30.180482244 +0000 UTC m=+0.027170961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:30 np0005580781 systemd[1]: Started libpod-conmon-19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85.scope.
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'balancer'
Jan 10 11:57:30 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:30 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'cephadm'
Jan 10 11:57:30 np0005580781 podman[75988]: 2026-01-10 16:57:30.425614994 +0000 UTC m=+0.272303721 container init 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:30 np0005580781 podman[75988]: 2026-01-10 16:57:30.432048949 +0000 UTC m=+0.278737646 container start 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:30 np0005580781 podman[75988]: 2026-01-10 16:57:30.454413478 +0000 UTC m=+0.301102175 container attach 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 10 11:57:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4146363671' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 11:57:30 np0005580781 eager_lichterman[76016]: {
Jan 10 11:57:30 np0005580781 eager_lichterman[76016]:    "epoch": 5,
Jan 10 11:57:30 np0005580781 eager_lichterman[76016]:    "available": true,
Jan 10 11:57:30 np0005580781 eager_lichterman[76016]:    "active_name": "compute-0.mkxlpr",
Jan 10 11:57:30 np0005580781 eager_lichterman[76016]:    "num_standby": 0
Jan 10 11:57:30 np0005580781 eager_lichterman[76016]: }
Jan 10 11:57:30 np0005580781 systemd[1]: libpod-19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85.scope: Deactivated successfully.
Jan 10 11:57:30 np0005580781 podman[75988]: 2026-01-10 16:57:30.97933139 +0000 UTC m=+0.826020087 container died 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 11:57:31 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2571123315' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 10 11:57:31 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1917c5a6443856320d0accb6a79726639d9459c185e046236540055b2f171c5e-merged.mount: Deactivated successfully.
Jan 10 11:57:31 np0005580781 podman[75988]: 2026-01-10 16:57:31.164626129 +0000 UTC m=+1.011314826 container remove 19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85 (image=quay.io/ceph/ceph:v20, name=eager_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 11:57:31 np0005580781 systemd[1]: libpod-conmon-19cf23cc3f52b24869726c16a9e2a4b1ebe7fd7d8f2d8099ebb1740aee795f85.scope: Deactivated successfully.
Jan 10 11:57:31 np0005580781 podman[76067]: 2026-01-10 16:57:31.237640568 +0000 UTC m=+0.048692677 container create 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 11:57:31 np0005580781 systemd[1]: Started libpod-conmon-5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc.scope.
Jan 10 11:57:31 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:31 np0005580781 podman[76067]: 2026-01-10 16:57:31.305338503 +0000 UTC m=+0.116390632 container init 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:31 np0005580781 podman[76067]: 2026-01-10 16:57:31.312044166 +0000 UTC m=+0.123096265 container start 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 11:57:31 np0005580781 podman[76067]: 2026-01-10 16:57:31.219457293 +0000 UTC m=+0.030509422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:31 np0005580781 podman[76067]: 2026-01-10 16:57:31.315373536 +0000 UTC m=+0.126425645 container attach 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 11:57:31 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'crash'
Jan 10 11:57:31 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'dashboard'
Jan 10 11:57:32 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'devicehealth'
Jan 10 11:57:32 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'diskprediction_local'
Jan 10 11:57:32 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 10 11:57:32 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 10 11:57:32 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]:  from numpy import show_config as show_numpy_config
Jan 10 11:57:32 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'influx'
Jan 10 11:57:32 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'insights'
Jan 10 11:57:32 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'iostat'
Jan 10 11:57:32 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'k8sevents'
Jan 10 11:57:33 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'localpool'
Jan 10 11:57:33 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'mds_autoscaler'
Jan 10 11:57:33 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'mirroring'
Jan 10 11:57:33 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'nfs'
Jan 10 11:57:33 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'orchestrator'
Jan 10 11:57:34 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'osd_perf_query'
Jan 10 11:57:34 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'osd_support'
Jan 10 11:57:34 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'pg_autoscaler'
Jan 10 11:57:34 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'progress'
Jan 10 11:57:34 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'prometheus'
Jan 10 11:57:34 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'rbd_support'
Jan 10 11:57:34 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'rgw'
Jan 10 11:57:35 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'rook'
Jan 10 11:57:35 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'selftest'
Jan 10 11:57:35 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'smb'
Jan 10 11:57:36 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'snap_schedule'
Jan 10 11:57:36 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'stats'
Jan 10 11:57:36 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'status'
Jan 10 11:57:36 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'telegraf'
Jan 10 11:57:36 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'telemetry'
Jan 10 11:57:36 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'test_orchestrator'
Jan 10 11:57:36 np0005580781 ceph-mgr[75538]: mgr[py] Loading python module 'volumes'
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mkxlpr restarted
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mkxlpr
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: ms_deliver_dispatch: unhandled message 0x55a9b2db6000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: mgr handle_mgr_map Activating!
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: mgr handle_mgr_map I am now activating
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mkxlpr(active, starting, since 0.738038s)
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: Active manager daemon compute-0.mkxlpr restarted
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: Activating manager daemon compute-0.mkxlpr
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} v 0)
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr metadata", "who": "compute-0.mkxlpr", "id": "compute-0.mkxlpr"} : dispatch
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata"} : dispatch
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e1 all = 1
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata"} : dispatch
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata"} : dispatch
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: balancer
Jan 10 11:57:37 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Manager daemon compute-0.mkxlpr is now available
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Starting
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:57:37
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 11:57:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] No pools available
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019918966 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: cephadm
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: crash
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: devicehealth
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] Starting
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: iostat
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: nfs
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: orchestrator
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: pg_autoscaler
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: progress
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [progress INFO root] Loading...
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [progress INFO root] No stored events to load
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [progress INFO root] Loaded [] historic events
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [progress INFO root] Loaded OSDMap, ready.
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] recovery thread starting
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] starting setup
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: rbd_support
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: status
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: telemetry
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] PerfHandler: starting
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TaskHandler: starting
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} v 0)
Jan 10 11:57:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] setup complete
Jan 10 11:57:38 np0005580781 ceph-mgr[75538]: mgr load Constructed class from module: volumes
Jan 10 11:57:39 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Bus STARTING
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Bus STARTING
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Serving on https://192.168.122.100:7150
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Serving on https://192.168.122.100:7150
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Client ('192.168.122.100', 44532) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Client ('192.168.122.100', 44532) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Serving on http://192.168.122.100:8765
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Serving on http://192.168.122.100:8765
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: [cephadm INFO cherrypy.error] [10/Jan/2026:16:57:40] ENGINE Bus STARTED
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : [10/Jan/2026:16:57:40] ENGINE Bus STARTED
Jan 10 11:57:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 11:57:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 11:57:40 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:41 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 10 11:57:41 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mkxlpr(active, since 4s)
Jan 10 11:57:41 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 10 11:57:41 np0005580781 bold_mcnulty[76083]: {
Jan 10 11:57:41 np0005580781 bold_mcnulty[76083]:    "mgrmap_epoch": 7,
Jan 10 11:57:41 np0005580781 bold_mcnulty[76083]:    "initialized": true
Jan 10 11:57:41 np0005580781 bold_mcnulty[76083]: }
Jan 10 11:57:41 np0005580781 systemd[1]: libpod-5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc.scope: Deactivated successfully.
Jan 10 11:57:41 np0005580781 podman[76067]: 2026-01-10 16:57:41.648216168 +0000 UTC m=+10.459268287 container died 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:41 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: Manager daemon compute-0.mkxlpr is now available
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/mirror_snapshot_schedule"} : dispatch
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mkxlpr/trash_purge_schedule"} : dispatch
Jan 10 11:57:42 np0005580781 systemd[1]: var-lib-containers-storage-overlay-452c29ec949b64f9a729075ea8d796d2cbd91490fe5194c49e64301606622e2a-merged.mount: Deactivated successfully.
Jan 10 11:57:42 np0005580781 podman[76067]: 2026-01-10 16:57:42.443944019 +0000 UTC m=+11.254996128 container remove 5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc (image=quay.io/ceph/ceph:v20, name=bold_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 11:57:42 np0005580781 systemd[1]: libpod-conmon-5aa676cdc0e8a89e0afc47ddf61416ffa40418c478b04a3684e5684c2ce493fc.scope: Deactivated successfully.
Jan 10 11:57:42 np0005580781 podman[76255]: 2026-01-10 16:57:42.526258522 +0000 UTC m=+0.052864581 container create ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 10 11:57:42 np0005580781 systemd[1]: Started libpod-conmon-ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a.scope.
Jan 10 11:57:42 np0005580781 podman[76255]: 2026-01-10 16:57:42.499127833 +0000 UTC m=+0.025733922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:42 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:42 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mkxlpr(active, since 5s)
Jan 10 11:57:42 np0005580781 podman[76255]: 2026-01-10 16:57:42.624441656 +0000 UTC m=+0.151047755 container init ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 11:57:42 np0005580781 podman[76255]: 2026-01-10 16:57:42.635153198 +0000 UTC m=+0.161759267 container start ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:42 np0005580781 podman[76255]: 2026-01-10 16:57:42.639219639 +0000 UTC m=+0.165825758 container attach ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:42 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052876 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: Found migration_current of "None". Setting to last migration.
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Bus STARTING
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Serving on https://192.168.122.100:7150
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Client ('192.168.122.100', 44532) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Serving on http://192.168.122.100:8765
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: [10/Jan/2026:16:57:40] ENGINE Bus STARTED
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 10 11:57:43 np0005580781 stoic_kepler[76270]: module 'orchestrator' is already enabled (always-on)
Jan 10 11:57:43 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mkxlpr(active, since 6s)
Jan 10 11:57:43 np0005580781 systemd[1]: libpod-ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a.scope: Deactivated successfully.
Jan 10 11:57:43 np0005580781 podman[76255]: 2026-01-10 16:57:43.633244083 +0000 UTC m=+1.159850132 container died ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 11:57:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9e3158fcf75487419cc35fd32097b3bf39ebaf0b97cf3bcb820f642bd848d6a3-merged.mount: Deactivated successfully.
Jan 10 11:57:43 np0005580781 podman[76255]: 2026-01-10 16:57:43.671422053 +0000 UTC m=+1.198028112 container remove ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a (image=quay.io/ceph/ceph:v20, name=stoic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:57:43 np0005580781 systemd[1]: libpod-conmon-ed2abd28ee0d1289cf4aee86c8c8b78353f0d153fce4e56d7a7acb7488d0bb7a.scope: Deactivated successfully.
Jan 10 11:57:43 np0005580781 podman[76310]: 2026-01-10 16:57:43.769113855 +0000 UTC m=+0.068133468 container create 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 11:57:43 np0005580781 systemd[1]: Started libpod-conmon-6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f.scope.
Jan 10 11:57:43 np0005580781 podman[76310]: 2026-01-10 16:57:43.741634436 +0000 UTC m=+0.040654059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:43 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:43 np0005580781 podman[76310]: 2026-01-10 16:57:43.870057465 +0000 UTC m=+0.169077098 container init 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 11:57:43 np0005580781 podman[76310]: 2026-01-10 16:57:43.882209416 +0000 UTC m=+0.181229009 container start 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 11:57:43 np0005580781 podman[76310]: 2026-01-10 16:57:43.885743133 +0000 UTC m=+0.184762716 container attach 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:43 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:44 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 10 11:57:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 11:57:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 11:57:44 np0005580781 systemd[1]: libpod-6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f.scope: Deactivated successfully.
Jan 10 11:57:44 np0005580781 podman[76310]: 2026-01-10 16:57:44.407986042 +0000 UTC m=+0.707005625 container died 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:44 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9952ba6ac0517f54bf653e61cd50ab105e765d99266125826e9eeb81dd8ec2ea-merged.mount: Deactivated successfully.
Jan 10 11:57:44 np0005580781 podman[76310]: 2026-01-10 16:57:44.447508649 +0000 UTC m=+0.746528232 container remove 6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f (image=quay.io/ceph/ceph:v20, name=dazzling_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 11:57:44 np0005580781 systemd[1]: libpod-conmon-6a74cccb75ba1080be29df75c5f03afffecf3bc4e14563c2d31fa5bd6bafcb5f.scope: Deactivated successfully.
Jan 10 11:57:44 np0005580781 podman[76364]: 2026-01-10 16:57:44.520897499 +0000 UTC m=+0.049800578 container create 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:44 np0005580781 systemd[1]: Started libpod-conmon-79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d.scope.
Jan 10 11:57:44 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:44 np0005580781 podman[76364]: 2026-01-10 16:57:44.496852553 +0000 UTC m=+0.025755702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:44 np0005580781 podman[76364]: 2026-01-10 16:57:44.602248595 +0000 UTC m=+0.131151694 container init 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:44 np0005580781 podman[76364]: 2026-01-10 16:57:44.609906634 +0000 UTC m=+0.138809703 container start 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 11:57:44 np0005580781 podman[76364]: 2026-01-10 16:57:44.61563279 +0000 UTC m=+0.144535869 container attach 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 11:57:44 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/840503998' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 10 11:57:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:44 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 10 11:57:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_user
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 10 11:57:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 10 11:57:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_config
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 10 11:57:45 np0005580781 busy_faraday[76380]: ssh user set to ceph-admin. sudo will be used
Jan 10 11:57:45 np0005580781 systemd[1]: libpod-79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d.scope: Deactivated successfully.
Jan 10 11:57:45 np0005580781 podman[76364]: 2026-01-10 16:57:45.075591212 +0000 UTC m=+0.604494391 container died 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 11:57:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-03b79f61d8ad37e8b75bd3ae861bd95335126093f5b2ab7afcfff1d02917eb5b-merged.mount: Deactivated successfully.
Jan 10 11:57:45 np0005580781 podman[76364]: 2026-01-10 16:57:45.122813709 +0000 UTC m=+0.651716798 container remove 79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d (image=quay.io/ceph/ceph:v20, name=busy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 11:57:45 np0005580781 systemd[1]: libpod-conmon-79dccadedc093b6b44c5b66e0a3079ef5534fb364a8c5cc468be5d742e4e898d.scope: Deactivated successfully.
Jan 10 11:57:45 np0005580781 podman[76417]: 2026-01-10 16:57:45.198533012 +0000 UTC m=+0.049743727 container create 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:45 np0005580781 systemd[1]: Started libpod-conmon-3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e.scope.
Jan 10 11:57:45 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 podman[76417]: 2026-01-10 16:57:45.257653813 +0000 UTC m=+0.108864508 container init 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 11:57:45 np0005580781 podman[76417]: 2026-01-10 16:57:45.268420856 +0000 UTC m=+0.119631551 container start 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 11:57:45 np0005580781 podman[76417]: 2026-01-10 16:57:45.177400286 +0000 UTC m=+0.028611001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:45 np0005580781 podman[76417]: 2026-01-10 16:57:45.27259597 +0000 UTC m=+0.123806665 container attach 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 10 11:57:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Set ssh private key
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 10 11:57:45 np0005580781 systemd[1]: libpod-3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e.scope: Deactivated successfully.
Jan 10 11:57:45 np0005580781 podman[76417]: 2026-01-10 16:57:45.700438337 +0000 UTC m=+0.551649012 container died 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 10 11:57:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-758f0cc8b487db06d8107546412f38bc9e86085d21fb93972d075cf8d9dad0ed-merged.mount: Deactivated successfully.
Jan 10 11:57:45 np0005580781 podman[76417]: 2026-01-10 16:57:45.767639208 +0000 UTC m=+0.618849883 container remove 3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e (image=quay.io/ceph/ceph:v20, name=trusting_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:45 np0005580781 systemd[1]: libpod-conmon-3686a0d717c074238152d28d82f68d68925b9bf053b215881273cb114c88e10e.scope: Deactivated successfully.
Jan 10 11:57:45 np0005580781 podman[76466]: 2026-01-10 16:57:45.828477796 +0000 UTC m=+0.037910754 container create 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 11:57:45 np0005580781 systemd[1]: Started libpod-conmon-781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331.scope.
Jan 10 11:57:45 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:45 np0005580781 podman[76466]: 2026-01-10 16:57:45.895116661 +0000 UTC m=+0.104549629 container init 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:45 np0005580781 podman[76466]: 2026-01-10 16:57:45.899802699 +0000 UTC m=+0.109235677 container start 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:45 np0005580781 podman[76466]: 2026-01-10 16:57:45.904444466 +0000 UTC m=+0.113877444 container attach 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 11:57:45 np0005580781 podman[76466]: 2026-01-10 16:57:45.813006694 +0000 UTC m=+0.022439652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:45 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: Set ssh ssh_user
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: Set ssh ssh_config
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: ssh user set to ceph-admin. sudo will be used
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:46 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 10 11:57:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:46 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 10 11:57:46 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 10 11:57:46 np0005580781 systemd[1]: libpod-781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331.scope: Deactivated successfully.
Jan 10 11:57:46 np0005580781 podman[76466]: 2026-01-10 16:57:46.315556306 +0000 UTC m=+0.524989264 container died 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:57:46 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a82b5fbe23a4a32652f440bfa9f5113c917699fbf4696736d576647a74396761-merged.mount: Deactivated successfully.
Jan 10 11:57:46 np0005580781 podman[76466]: 2026-01-10 16:57:46.424119724 +0000 UTC m=+0.633552682 container remove 781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331 (image=quay.io/ceph/ceph:v20, name=xenodochial_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 11:57:46 np0005580781 systemd[1]: libpod-conmon-781f0a5b7e29ab433e605cd4dd90f1e972408d8dfb8b33f29f3c880a35af8331.scope: Deactivated successfully.
Jan 10 11:57:46 np0005580781 podman[76525]: 2026-01-10 16:57:46.488773976 +0000 UTC m=+0.045609394 container create 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:46 np0005580781 systemd[1]: Started libpod-conmon-5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035.scope.
Jan 10 11:57:46 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:46 np0005580781 podman[76525]: 2026-01-10 16:57:46.464575436 +0000 UTC m=+0.021410944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:46 np0005580781 podman[76525]: 2026-01-10 16:57:46.56452567 +0000 UTC m=+0.121361118 container init 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:46 np0005580781 podman[76525]: 2026-01-10 16:57:46.570561384 +0000 UTC m=+0.127396812 container start 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:46 np0005580781 podman[76525]: 2026-01-10 16:57:46.57480552 +0000 UTC m=+0.131640938 container attach 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 11:57:46 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:47 np0005580781 ceph-mon[75249]: Set ssh ssh_identity_key
Jan 10 11:57:47 np0005580781 ceph-mon[75249]: Set ssh private key
Jan 10 11:57:47 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:47 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:47 np0005580781 serene_meitner[76542]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrpqUxP+rrDq/Ne49spAJer50GELazLs3h1Q1nSPstwxnA2ih3Wxc7ipm12Sm1nyI+Pq8x+KeJfj8IpMWVJpnmQgku7OTLPpBCMUgZqvskdS1H2Lo01SrPTBNx3RSKKPUYYjNr2DxupexnUOZYFwiOGp9jEEzNk4MSgMtVcbTdfsfcmKdVlHjFydPyM0m2/FrIviz1xEQHZcABIIMi1tW0wmhobTPykznW/rR0uMofhcN8Ktm20+RCa5/1KZ800IrngzeRoXPZuq10fqggUWj0mJJ9MfqSz2dOblfXIYKAO7QA+vJd1s92aBmtAORIFSXqs6pGZcuml5k1iJb8gHy/FOl4u/jVrcBfDoC6g7CEmIksIVSAHsRWhzYiZNbBf2pJQjwzzSTBxh7T2deblHWj1XFJxdfNNeQacucZThihEExtBiXou8QGNVNs5s8Oe4pE+gjOhim955mz3GivSHu8b0T44AWrjaB1p6W28JvYYhl4DYSfw6kawGZpupmbDEc= zuul@controller
Jan 10 11:57:47 np0005580781 systemd[1]: libpod-5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035.scope: Deactivated successfully.
Jan 10 11:57:47 np0005580781 podman[76525]: 2026-01-10 16:57:47.097875432 +0000 UTC m=+0.654710870 container died 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 11:57:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay-15da8e97256aa69cba98cadf0142516b270959364cce38a7b76dab91b5d8d6b8-merged.mount: Deactivated successfully.
Jan 10 11:57:47 np0005580781 podman[76525]: 2026-01-10 16:57:47.144923514 +0000 UTC m=+0.701758942 container remove 5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035 (image=quay.io/ceph/ceph:v20, name=serene_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:47 np0005580781 systemd[1]: libpod-conmon-5c46fdc8ebd1cae46842ece0c9f6d970417b0ab91c477354981ec679e7079035.scope: Deactivated successfully.
Jan 10 11:57:47 np0005580781 podman[76579]: 2026-01-10 16:57:47.24937201 +0000 UTC m=+0.074762048 container create 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 11:57:47 np0005580781 systemd[1]: Started libpod-conmon-2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0.scope.
Jan 10 11:57:47 np0005580781 podman[76579]: 2026-01-10 16:57:47.218649703 +0000 UTC m=+0.044039791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:47 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:47 np0005580781 podman[76579]: 2026-01-10 16:57:47.341254393 +0000 UTC m=+0.166644491 container init 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 11:57:47 np0005580781 podman[76579]: 2026-01-10 16:57:47.346985059 +0000 UTC m=+0.172375107 container start 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 11:57:47 np0005580781 podman[76579]: 2026-01-10 16:57:47.351494402 +0000 UTC m=+0.176884400 container attach 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:47 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:47 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:48 np0005580781 systemd-logind[798]: New session 21 of user ceph-admin.
Jan 10 11:57:48 np0005580781 systemd[1]: Created slice User Slice of UID 42477.
Jan 10 11:57:48 np0005580781 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 10 11:57:48 np0005580781 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 10 11:57:48 np0005580781 ceph-mon[75249]: Set ssh ssh_identity_pub
Jan 10 11:57:48 np0005580781 systemd[1]: Starting User Manager for UID 42477...
Jan 10 11:57:48 np0005580781 systemd-logind[798]: New session 23 of user ceph-admin.
Jan 10 11:57:48 np0005580781 systemd[76625]: Queued start job for default target Main User Target.
Jan 10 11:57:48 np0005580781 systemd[76625]: Created slice User Application Slice.
Jan 10 11:57:48 np0005580781 systemd[76625]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 10 11:57:48 np0005580781 systemd[76625]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 11:57:48 np0005580781 systemd[76625]: Reached target Paths.
Jan 10 11:57:48 np0005580781 systemd[76625]: Reached target Timers.
Jan 10 11:57:48 np0005580781 systemd[76625]: Starting D-Bus User Message Bus Socket...
Jan 10 11:57:48 np0005580781 systemd[76625]: Starting Create User's Volatile Files and Directories...
Jan 10 11:57:48 np0005580781 systemd[76625]: Listening on D-Bus User Message Bus Socket.
Jan 10 11:57:48 np0005580781 systemd[76625]: Reached target Sockets.
Jan 10 11:57:48 np0005580781 systemd[76625]: Finished Create User's Volatile Files and Directories.
Jan 10 11:57:48 np0005580781 systemd[76625]: Reached target Basic System.
Jan 10 11:57:48 np0005580781 systemd[76625]: Reached target Main User Target.
Jan 10 11:57:48 np0005580781 systemd[76625]: Startup finished in 181ms.
Jan 10 11:57:48 np0005580781 systemd[1]: Started User Manager for UID 42477.
Jan 10 11:57:48 np0005580781 systemd[1]: Started Session 21 of User ceph-admin.
Jan 10 11:57:48 np0005580781 systemd[1]: Started Session 23 of User ceph-admin.
Jan 10 11:57:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054706 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:57:48 np0005580781 systemd-logind[798]: New session 24 of user ceph-admin.
Jan 10 11:57:48 np0005580781 systemd[1]: Started Session 24 of User ceph-admin.
Jan 10 11:57:48 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:49 np0005580781 systemd-logind[798]: New session 25 of user ceph-admin.
Jan 10 11:57:49 np0005580781 systemd[1]: Started Session 25 of User ceph-admin.
Jan 10 11:57:49 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 10 11:57:49 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 10 11:57:49 np0005580781 systemd-logind[798]: New session 26 of user ceph-admin.
Jan 10 11:57:49 np0005580781 systemd[1]: Started Session 26 of User ceph-admin.
Jan 10 11:57:49 np0005580781 systemd-logind[798]: New session 27 of user ceph-admin.
Jan 10 11:57:49 np0005580781 systemd[1]: Started Session 27 of User ceph-admin.
Jan 10 11:57:49 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:50 np0005580781 systemd-logind[798]: New session 28 of user ceph-admin.
Jan 10 11:57:50 np0005580781 systemd[1]: Started Session 28 of User ceph-admin.
Jan 10 11:57:50 np0005580781 ceph-mon[75249]: Deploying cephadm binary to compute-0
Jan 10 11:57:50 np0005580781 systemd-logind[798]: New session 29 of user ceph-admin.
Jan 10 11:57:50 np0005580781 systemd[1]: Started Session 29 of User ceph-admin.
Jan 10 11:57:50 np0005580781 systemd-logind[798]: New session 30 of user ceph-admin.
Jan 10 11:57:50 np0005580781 systemd[1]: Started Session 30 of User ceph-admin.
Jan 10 11:57:50 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:51 np0005580781 systemd-logind[798]: New session 31 of user ceph-admin.
Jan 10 11:57:51 np0005580781 systemd[1]: Started Session 31 of User ceph-admin.
Jan 10 11:57:51 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:52 np0005580781 systemd-logind[798]: New session 32 of user ceph-admin.
Jan 10 11:57:52 np0005580781 systemd[1]: Started Session 32 of User ceph-admin.
Jan 10 11:57:52 np0005580781 systemd-logind[798]: New session 33 of user ceph-admin.
Jan 10 11:57:52 np0005580781 systemd[1]: Started Session 33 of User ceph-admin.
Jan 10 11:57:52 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:57:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 11:57:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:53 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Added host compute-0
Jan 10 11:57:53 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 10 11:57:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 11:57:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 11:57:53 np0005580781 festive_neumann[76595]: Added host 'compute-0' with addr '192.168.122.100'
Jan 10 11:57:53 np0005580781 systemd[1]: libpod-2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0.scope: Deactivated successfully.
Jan 10 11:57:53 np0005580781 podman[76579]: 2026-01-10 16:57:53.727564367 +0000 UTC m=+6.552954375 container died 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 11:57:53 np0005580781 systemd[1]: var-lib-containers-storage-overlay-dfbe2abb034826463531483205d95b836ac34ae8e43e539816df68c133f4f023-merged.mount: Deactivated successfully.
Jan 10 11:57:53 np0005580781 podman[76579]: 2026-01-10 16:57:53.782933285 +0000 UTC m=+6.608323293 container remove 2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0 (image=quay.io/ceph/ceph:v20, name=festive_neumann, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 11:57:53 np0005580781 systemd[1]: libpod-conmon-2033c8e580bc9c2364e6cec5254f1ccffdc2abfdf19226fa902fd64fdfbc48a0.scope: Deactivated successfully.
Jan 10 11:57:53 np0005580781 podman[77026]: 2026-01-10 16:57:53.860259072 +0000 UTC m=+0.050540298 container create db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 11:57:53 np0005580781 systemd[1]: Started libpod-conmon-db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2.scope.
Jan 10 11:57:53 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:53 np0005580781 podman[77026]: 2026-01-10 16:57:53.83450708 +0000 UTC m=+0.024788336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:53 np0005580781 podman[77026]: 2026-01-10 16:57:53.95560545 +0000 UTC m=+0.145886776 container init db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 11:57:53 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:53 np0005580781 podman[77026]: 2026-01-10 16:57:53.968348157 +0000 UTC m=+0.158629383 container start db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:53 np0005580781 podman[77026]: 2026-01-10 16:57:53.972281264 +0000 UTC m=+0.162562500 container attach db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:57:54 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:54 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 10 11:57:54 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 10 11:57:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 11:57:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:54 np0005580781 elastic_feynman[77057]: Scheduled mon update...
Jan 10 11:57:54 np0005580781 podman[77026]: 2026-01-10 16:57:54.422654436 +0000 UTC m=+0.612935662 container died db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 11:57:54 np0005580781 systemd[1]: libpod-db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2.scope: Deactivated successfully.
Jan 10 11:57:54 np0005580781 systemd[1]: var-lib-containers-storage-overlay-63ccc6cadbacf05fe314fb440394a4f88e043bea9e7b66dad9e1f6d1052f8f60-merged.mount: Deactivated successfully.
Jan 10 11:57:54 np0005580781 podman[77026]: 2026-01-10 16:57:54.473805269 +0000 UTC m=+0.664086495 container remove db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2 (image=quay.io/ceph/ceph:v20, name=elastic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:54 np0005580781 systemd[1]: libpod-conmon-db415988d8c8468502dc5e9848d3f3424ccc75699662227c01261013310874c2.scope: Deactivated successfully.
Jan 10 11:57:54 np0005580781 podman[77120]: 2026-01-10 16:57:54.551564698 +0000 UTC m=+0.046569910 container create ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 11:57:54 np0005580781 systemd[1]: Started libpod-conmon-ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a.scope.
Jan 10 11:57:54 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:54 np0005580781 podman[77120]: 2026-01-10 16:57:54.530887354 +0000 UTC m=+0.025892586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:54 np0005580781 podman[77120]: 2026-01-10 16:57:54.634673562 +0000 UTC m=+0.129678804 container init ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 11:57:54 np0005580781 podman[77120]: 2026-01-10 16:57:54.641597481 +0000 UTC m=+0.136602693 container start ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 11:57:54 np0005580781 podman[77120]: 2026-01-10 16:57:54.647042519 +0000 UTC m=+0.142047731 container attach ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:54 np0005580781 podman[77077]: 2026-01-10 16:57:54.667859817 +0000 UTC m=+0.560172494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:54 np0005580781 ceph-mon[75249]: Added host compute-0
Jan 10 11:57:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:54 np0005580781 podman[77156]: 2026-01-10 16:57:54.813861965 +0000 UTC m=+0.069988188 container create e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 11:57:54 np0005580781 systemd[1]: Started libpod-conmon-e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a.scope.
Jan 10 11:57:54 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:54 np0005580781 podman[77156]: 2026-01-10 16:57:54.78689605 +0000 UTC m=+0.043022313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:54 np0005580781 podman[77156]: 2026-01-10 16:57:54.889936337 +0000 UTC m=+0.146062570 container init e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 10 11:57:54 np0005580781 podman[77156]: 2026-01-10 16:57:54.89443986 +0000 UTC m=+0.150566073 container start e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:54 np0005580781 podman[77156]: 2026-01-10 16:57:54.898539962 +0000 UTC m=+0.154666195 container attach e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:54 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:55 np0005580781 admiring_cartwright[77191]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 10 11:57:55 np0005580781 systemd[1]: libpod-e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a.scope: Deactivated successfully.
Jan 10 11:57:55 np0005580781 podman[77156]: 2026-01-10 16:57:55.015094758 +0000 UTC m=+0.271221011 container died e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 11:57:55 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c19efb4e5eaf5b29d1899115cdfa4cb4af4ef76b9b1376514bd5351357a32c3a-merged.mount: Deactivated successfully.
Jan 10 11:57:55 np0005580781 podman[77156]: 2026-01-10 16:57:55.062250282 +0000 UTC m=+0.318376495 container remove e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a (image=quay.io/ceph/ceph:v20, name=admiring_cartwright, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 11:57:55 np0005580781 systemd[1]: libpod-conmon-e1f97356295bcda32753361d46f88e149ea3c6ecf538f8c96e68461374d4477a.scope: Deactivated successfully.
Jan 10 11:57:55 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:55 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 10 11:57:55 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:55 np0005580781 vigilant_feistel[77137]: Scheduled mgr update...
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 10 11:57:55 np0005580781 systemd[1]: libpod-ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a.scope: Deactivated successfully.
Jan 10 11:57:55 np0005580781 podman[77120]: 2026-01-10 16:57:55.126495713 +0000 UTC m=+0.621500925 container died ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:55 np0005580781 systemd[1]: var-lib-containers-storage-overlay-cdadede5dce6c8f517acde5bac57308151872d4450a9197b4a2d6af387481813-merged.mount: Deactivated successfully.
Jan 10 11:57:55 np0005580781 podman[77120]: 2026-01-10 16:57:55.168255341 +0000 UTC m=+0.663260553 container remove ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a (image=quay.io/ceph/ceph:v20, name=vigilant_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 11:57:55 np0005580781 systemd[1]: libpod-conmon-ec5261574d183ca2b9b69691740b7d4243f0067afb803b840879dfaab540f94a.scope: Deactivated successfully.
Jan 10 11:57:55 np0005580781 podman[77236]: 2026-01-10 16:57:55.239566974 +0000 UTC m=+0.047467025 container create b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 11:57:55 np0005580781 systemd[1]: Started libpod-conmon-b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84.scope.
Jan 10 11:57:55 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:55 np0005580781 podman[77236]: 2026-01-10 16:57:55.300193366 +0000 UTC m=+0.108093417 container init b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 11:57:55 np0005580781 podman[77236]: 2026-01-10 16:57:55.305738857 +0000 UTC m=+0.113638908 container start b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 11:57:55 np0005580781 podman[77236]: 2026-01-10 16:57:55.309636643 +0000 UTC m=+0.117536694 container attach b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 11:57:55 np0005580781 podman[77236]: 2026-01-10 16:57:55.221322947 +0000 UTC m=+0.029223018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:55 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:55 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service crash spec with placement *
Jan 10 11:57:55 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 10 11:57:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:55 np0005580781 priceless_hawking[77290]: Scheduled crash update...
Jan 10 11:57:55 np0005580781 systemd[1]: libpod-b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84.scope: Deactivated successfully.
Jan 10 11:57:55 np0005580781 podman[77236]: 2026-01-10 16:57:55.816611886 +0000 UTC m=+0.624511957 container died b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 11:57:55 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2aedd641c74edfccecfb5c8d96f8dbf9e00ad5c0b8caa47799481f37e53d9e77-merged.mount: Deactivated successfully.
Jan 10 11:57:55 np0005580781 ceph-mgr[75538]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 10 11:57:56 np0005580781 podman[77236]: 2026-01-10 16:57:56.003299663 +0000 UTC m=+0.811199714 container remove b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84 (image=quay.io/ceph/ceph:v20, name=priceless_hawking, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:56 np0005580781 systemd[1]: libpod-conmon-b6d8d74465cbd7b30ecb9d16a88c5d6aa4a07a04aa3822b95a067486cd358d84.scope: Deactivated successfully.
Jan 10 11:57:56 np0005580781 podman[77407]: 2026-01-10 16:57:56.064046348 +0000 UTC m=+0.040737251 container create e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:57:56 np0005580781 systemd[1]: Started libpod-conmon-e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858.scope.
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: Saving service mon spec with placement count:5
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: Saving service mgr spec with placement count:2
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:56 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:56 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:56 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:56 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:56 np0005580781 podman[77407]: 2026-01-10 16:57:56.140594114 +0000 UTC m=+0.117285127 container init e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:56 np0005580781 podman[77407]: 2026-01-10 16:57:56.047812226 +0000 UTC m=+0.024503149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:56 np0005580781 podman[77407]: 2026-01-10 16:57:56.149390853 +0000 UTC m=+0.126081776 container start e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:56 np0005580781 podman[77407]: 2026-01-10 16:57:56.153521766 +0000 UTC m=+0.130212719 container attach e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:56 np0005580781 podman[77461]: 2026-01-10 16:57:56.214521358 +0000 UTC m=+0.055399881 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:56 np0005580781 podman[77461]: 2026-01-10 16:57:56.340801349 +0000 UTC m=+0.181679922 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 10 11:57:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2764466694' entity='client.admin' 
Jan 10 11:57:56 np0005580781 systemd[1]: libpod-e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858.scope: Deactivated successfully.
Jan 10 11:57:56 np0005580781 podman[77407]: 2026-01-10 16:57:56.848279736 +0000 UTC m=+0.824970649 container died e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:56 np0005580781 systemd[1]: var-lib-containers-storage-overlay-049f090f9e140bccd89dacb39b43e642ed06cb896f2dacf12278635b8820cbb8-merged.mount: Deactivated successfully.
Jan 10 11:57:56 np0005580781 podman[77407]: 2026-01-10 16:57:56.881567913 +0000 UTC m=+0.858258816 container remove e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858 (image=quay.io/ceph/ceph:v20, name=angry_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 11:57:56 np0005580781 systemd[1]: libpod-conmon-e74c89da720c13e578235980929c59004e5deb9ae7ec4832e713fc74d61aa858.scope: Deactivated successfully.
Jan 10 11:57:56 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:56 np0005580781 podman[77623]: 2026-01-10 16:57:56.962006523 +0000 UTC m=+0.058071032 container create 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:57 np0005580781 systemd[1]: Started libpod-conmon-2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1.scope.
Jan 10 11:57:57 np0005580781 podman[77623]: 2026-01-10 16:57:56.937809364 +0000 UTC m=+0.033873893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:57 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:57 np0005580781 podman[77623]: 2026-01-10 16:57:57.057403092 +0000 UTC m=+0.153467601 container init 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 11:57:57 np0005580781 podman[77623]: 2026-01-10 16:57:57.063375635 +0000 UTC m=+0.159440124 container start 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:57 np0005580781 podman[77623]: 2026-01-10 16:57:57.067196679 +0000 UTC m=+0.163261188 container attach 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:57 np0005580781 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77657 (sysctl)
Jan 10 11:57:57 np0005580781 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: Saving service crash spec with placement *
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2764466694' entity='client.admin' 
Jan 10 11:57:57 np0005580781 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 10 11:57:57 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:57 np0005580781 systemd[1]: libpod-2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1.scope: Deactivated successfully.
Jan 10 11:57:57 np0005580781 podman[77623]: 2026-01-10 16:57:57.498381278 +0000 UTC m=+0.594445767 container died 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 11:57:57 np0005580781 systemd[1]: var-lib-containers-storage-overlay-58f4d07da10d6f4c9bc518394e2016b276e3e37c5264cd3ff31fd6edd16a69e5-merged.mount: Deactivated successfully.
Jan 10 11:57:57 np0005580781 podman[77623]: 2026-01-10 16:57:57.600281834 +0000 UTC m=+0.696346323 container remove 2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1 (image=quay.io/ceph/ceph:v20, name=clever_cerf, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:57:57 np0005580781 systemd[1]: libpod-conmon-2ad6d78ba4e33065d8fbdce5bd6d5c87913fa5e8e37dad97de6a798af00e67a1.scope: Deactivated successfully.
Jan 10 11:57:57 np0005580781 podman[77763]: 2026-01-10 16:57:57.677096517 +0000 UTC m=+0.050149187 container create e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:57 np0005580781 systemd[1]: Started libpod-conmon-e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b.scope.
Jan 10 11:57:57 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:57 np0005580781 podman[77763]: 2026-01-10 16:57:57.653279848 +0000 UTC m=+0.026332498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:57 np0005580781 podman[77763]: 2026-01-10 16:57:57.766207345 +0000 UTC m=+0.139260015 container init e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 11:57:57 np0005580781 podman[77763]: 2026-01-10 16:57:57.772905848 +0000 UTC m=+0.145958518 container start e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 10 11:57:57 np0005580781 podman[77763]: 2026-01-10 16:57:57.777623156 +0000 UTC m=+0.150675876 container attach e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:57 np0005580781 ceph-mgr[75538]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 10 11:57:57 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:57:57 np0005580781 ceph-mon[75249]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 10 11:57:58 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:57:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 11:57:58 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:58 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Added label _admin to host compute-0
Jan 10 11:57:58 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 10 11:57:58 np0005580781 cool_shockley[77780]: Added label _admin to host compute-0
Jan 10 11:57:58 np0005580781 systemd[1]: libpod-e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b.scope: Deactivated successfully.
Jan 10 11:57:58 np0005580781 podman[77763]: 2026-01-10 16:57:58.222641721 +0000 UTC m=+0.595694351 container died e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 11:57:58 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0101285f28cbd0e84daf6a3829cc28e3629d2a366ae2d817e27761cdc0a3f476-merged.mount: Deactivated successfully.
Jan 10 11:57:58 np0005580781 podman[77763]: 2026-01-10 16:57:58.275048239 +0000 UTC m=+0.648100869 container remove e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b (image=quay.io/ceph/ceph:v20, name=cool_shockley, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 11:57:58 np0005580781 systemd[1]: libpod-conmon-e6454d67ea178efc94a4cbbff267a68509330ddc36ad313fdfacd92b391b3e4b.scope: Deactivated successfully.
Jan 10 11:57:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:57:58 np0005580781 podman[77898]: 2026-01-10 16:57:58.357917647 +0000 UTC m=+0.044557965 container create a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 11:57:58 np0005580781 podman[77894]: 2026-01-10 16:57:58.362552523 +0000 UTC m=+0.052870101 container create 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 11:57:58 np0005580781 systemd[1]: Started libpod-conmon-a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394.scope.
Jan 10 11:57:58 np0005580781 systemd[1]: Started libpod-conmon-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope.
Jan 10 11:57:58 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:58 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:58 np0005580781 podman[77898]: 2026-01-10 16:57:58.335493406 +0000 UTC m=+0.022133764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:57:58 np0005580781 podman[77894]: 2026-01-10 16:57:58.34295734 +0000 UTC m=+0.033274938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:57:58 np0005580781 podman[77894]: 2026-01-10 16:57:58.439438298 +0000 UTC m=+0.129755916 container init 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 11:57:58 np0005580781 podman[77898]: 2026-01-10 16:57:58.455499826 +0000 UTC m=+0.142140174 container init a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:58 np0005580781 podman[77894]: 2026-01-10 16:57:58.456674608 +0000 UTC m=+0.146992186 container start 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:57:58 np0005580781 podman[77894]: 2026-01-10 16:57:58.460115482 +0000 UTC m=+0.150433340 container attach 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:57:58 np0005580781 podman[77898]: 2026-01-10 16:57:58.467189345 +0000 UTC m=+0.153829663 container start a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 11:57:58 np0005580781 elastic_elgamal[77928]: 167 167
Jan 10 11:57:58 np0005580781 systemd[1]: libpod-a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394.scope: Deactivated successfully.
Jan 10 11:57:58 np0005580781 podman[77898]: 2026-01-10 16:57:58.47619492 +0000 UTC m=+0.162835278 container attach a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 10 11:57:58 np0005580781 podman[77898]: 2026-01-10 16:57:58.476850068 +0000 UTC m=+0.163490466 container died a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:57:58 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:58 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:58 np0005580781 ceph-mon[75249]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 10 11:57:58 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:57:58 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2bf2ddd64f891b3a062f505591811670ecd188dce8e6382467e04d24bf517a15-merged.mount: Deactivated successfully.
Jan 10 11:57:58 np0005580781 podman[77898]: 2026-01-10 16:57:58.526557212 +0000 UTC m=+0.213197510 container remove a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 11:57:58 np0005580781 systemd[1]: libpod-conmon-a803d1e4fafaf150174476959f9ea5b202dd2d6d6880370e5713df8f5951a394.scope: Deactivated successfully.
Jan 10 11:57:58 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:57:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 10 11:57:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/976546190' entity='client.admin' 
Jan 10 11:57:59 np0005580781 zen_bardeen[77930]: set mgr/dashboard/cluster/status
Jan 10 11:57:59 np0005580781 systemd[1]: libpod-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope: Deactivated successfully.
Jan 10 11:57:59 np0005580781 conmon[77930]: conmon 5d47000b14f360478494 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope/container/memory.events
Jan 10 11:57:59 np0005580781 podman[77894]: 2026-01-10 16:57:59.051265019 +0000 UTC m=+0.741582607 container died 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 11:57:59 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a8dba5f43d80ca03ed1e0f4ecf074625b24c3652b6ae10b3d791391e369e96da-merged.mount: Deactivated successfully.
Jan 10 11:57:59 np0005580781 podman[77894]: 2026-01-10 16:57:59.103278486 +0000 UTC m=+0.793596094 container remove 5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe (image=quay.io/ceph/ceph:v20, name=zen_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:59 np0005580781 systemd[1]: libpod-conmon-5d47000b14f360478494c3bbbc433246cb0ca5f2e834426ad480968b4994b4fe.scope: Deactivated successfully.
Jan 10 11:57:59 np0005580781 systemd[1]: Reloading.
Jan 10 11:57:59 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:57:59 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:57:59 np0005580781 podman[78031]: 2026-01-10 16:57:59.673283647 +0000 UTC m=+0.064275943 container create decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:59 np0005580781 systemd[1]: Started libpod-conmon-decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117.scope.
Jan 10 11:57:59 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:57:59 np0005580781 podman[78031]: 2026-01-10 16:57:59.65287154 +0000 UTC m=+0.043863846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:57:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:57:59 np0005580781 podman[78031]: 2026-01-10 16:57:59.762465577 +0000 UTC m=+0.153457923 container init decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:57:59 np0005580781 podman[78031]: 2026-01-10 16:57:59.777409974 +0000 UTC m=+0.168402280 container start decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 11:57:59 np0005580781 podman[78031]: 2026-01-10 16:57:59.78240848 +0000 UTC m=+0.173400776 container attach decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:57:59 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:00 np0005580781 python3[78077]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: Added label _admin to host compute-0
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/976546190' entity='client.admin' 
Jan 10 11:58:00 np0005580781 podman[78080]: 2026-01-10 16:58:00.095901102 +0000 UTC m=+0.066563285 container create 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:00 np0005580781 systemd[1]: Started libpod-conmon-982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c.scope.
Jan 10 11:58:00 np0005580781 podman[78080]: 2026-01-10 16:58:00.067274322 +0000 UTC m=+0.037936555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:00 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c485ee4affd3c4234fcd33ef289e37da4b5b14e5906a877535c275b87b2bb7e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c485ee4affd3c4234fcd33ef289e37da4b5b14e5906a877535c275b87b2bb7e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:00 np0005580781 podman[78080]: 2026-01-10 16:58:00.21657738 +0000 UTC m=+0.187239603 container init 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:00 np0005580781 podman[78080]: 2026-01-10 16:58:00.227192259 +0000 UTC m=+0.197854442 container start 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:00 np0005580781 podman[78080]: 2026-01-10 16:58:00.232032311 +0000 UTC m=+0.202694494 container attach 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]: [
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:    {
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "available": false,
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "being_replaced": false,
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "ceph_device_lvm": false,
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "lsm_data": {},
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "lvs": [],
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "path": "/dev/sr0",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "rejected_reasons": [
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "Insufficient space (<5GB)",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "Has a FileSystem"
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        ],
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        "sys_api": {
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "actuators": null,
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "device_nodes": [
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:                "sr0"
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            ],
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "devname": "sr0",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "human_readable_size": "482.00 KB",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "id_bus": "ata",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "model": "QEMU DVD-ROM",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "nr_requests": "2",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "parent": "/dev/sr0",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "partitions": {},
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "path": "/dev/sr0",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "removable": "1",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "rev": "2.5+",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "ro": "0",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "rotational": "1",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "sas_address": "",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "sas_device_handle": "",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "scheduler_mode": "mq-deadline",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "sectors": 0,
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "sectorsize": "2048",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "size": 493568.0,
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "support_discard": "2048",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "type": "disk",
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:            "vendor": "QEMU"
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:        }
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]:    }
Jan 10 11:58:00 np0005580781 youthful_beaver[78047]: ]
Jan 10 11:58:00 np0005580781 systemd[1]: libpod-decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117.scope: Deactivated successfully.
Jan 10 11:58:00 np0005580781 podman[78031]: 2026-01-10 16:58:00.399544695 +0000 UTC m=+0.790536961 container died decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:00 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e56b48da4556d049c22aa6b7f89ed22271a3e960de134680610d975d7e3b878f-merged.mount: Deactivated successfully.
Jan 10 11:58:00 np0005580781 podman[78031]: 2026-01-10 16:58:00.44303401 +0000 UTC m=+0.834026266 container remove decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 11:58:00 np0005580781 systemd[1]: libpod-conmon-decb3bd8b81703687aaec0707a0af07295d28ed4af6bd702abd83134f2d0e117.scope: Deactivated successfully.
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:00 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 10 11:58:00 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 10 11:58:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/156437027' entity='client.admin' 
Jan 10 11:58:00 np0005580781 systemd[1]: libpod-982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c.scope: Deactivated successfully.
Jan 10 11:58:00 np0005580781 podman[78080]: 2026-01-10 16:58:00.675322298 +0000 UTC m=+0.645984441 container died 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:00 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c485ee4affd3c4234fcd33ef289e37da4b5b14e5906a877535c275b87b2bb7e5-merged.mount: Deactivated successfully.
Jan 10 11:58:00 np0005580781 podman[78080]: 2026-01-10 16:58:00.71615231 +0000 UTC m=+0.686814453 container remove 982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c (image=quay.io/ceph/ceph:v20, name=distracted_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:00 np0005580781 systemd[1]: libpod-conmon-982292a70c7fb708b3e050e4992236e4f8606fe63932f5e2b030cc11e016886c.scope: Deactivated successfully.
Jan 10 11:58:00 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:01 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/156437027' entity='client.admin' 
Jan 10 11:58:01 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf
Jan 10 11:58:01 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf
Jan 10 11:58:01 np0005580781 ansible-async_wrapper.py[79355]: Invoked with j905361730444 30 /home/zuul/.ansible/tmp/ansible-tmp-1768064281.0961525-36469-151021259633936/AnsiballZ_command.py _
Jan 10 11:58:01 np0005580781 ansible-async_wrapper.py[79408]: Starting module and watcher
Jan 10 11:58:01 np0005580781 ansible-async_wrapper.py[79408]: Start watching 79409 (30)
Jan 10 11:58:01 np0005580781 ansible-async_wrapper.py[79409]: Start module (79409)
Jan 10 11:58:01 np0005580781 ansible-async_wrapper.py[79355]: Return async_wrapper task started.
Jan 10 11:58:01 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 10 11:58:01 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 10 11:58:01 np0005580781 python3[79410]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:01 np0005580781 podman[79463]: 2026-01-10 16:58:01.958877931 +0000 UTC m=+0.056888371 container create 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:01 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:02 np0005580781 systemd[1]: Started libpod-conmon-2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32.scope.
Jan 10 11:58:02 np0005580781 podman[79463]: 2026-01-10 16:58:01.941623821 +0000 UTC m=+0.039634281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:02 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:02 np0005580781 ceph-mon[75249]: Updating compute-0:/etc/ceph/ceph.conf
Jan 10 11:58:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75a81e994dfab45937bdb14287ac762e601c69f30e962ead116fbfdb0c48125/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75a81e994dfab45937bdb14287ac762e601c69f30e962ead116fbfdb0c48125/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:02 np0005580781 podman[79463]: 2026-01-10 16:58:02.060646154 +0000 UTC m=+0.158656594 container init 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 11:58:02 np0005580781 podman[79463]: 2026-01-10 16:58:02.068077266 +0000 UTC m=+0.166087696 container start 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 11:58:02 np0005580781 podman[79463]: 2026-01-10 16:58:02.07115616 +0000 UTC m=+0.169166690 container attach 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:02 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring
Jan 10 11:58:02 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring
Jan 10 11:58:02 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 11:58:02 np0005580781 zen_bartik[79528]: 
Jan 10 11:58:02 np0005580781 zen_bartik[79528]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 11:58:02 np0005580781 systemd[1]: libpod-2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32.scope: Deactivated successfully.
Jan 10 11:58:02 np0005580781 podman[79463]: 2026-01-10 16:58:02.527870124 +0000 UTC m=+0.625880594 container died 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:58:02 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:02 np0005580781 systemd[1]: var-lib-containers-storage-overlay-d75a81e994dfab45937bdb14287ac762e601c69f30e962ead116fbfdb0c48125-merged.mount: Deactivated successfully.
Jan 10 11:58:02 np0005580781 podman[79463]: 2026-01-10 16:58:02.991896367 +0000 UTC m=+1.089906807 container remove 2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 11:58:03 np0005580781 systemd[1]: libpod-conmon-2a8db026d63140e8ef5fbb9868a0afc9076e57f6f5c27b2bd11b2f1b31bddf32.scope: Deactivated successfully.
Jan 10 11:58:03 np0005580781 ansible-async_wrapper.py[79409]: Module complete (79409)
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.conf
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:03 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev 43d00670-8115-4c9e-9c2a-f2473001d570 (Updating crash deployment (+1 -> 1))
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:03 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 10 11:58:03 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 10 11:58:03 np0005580781 python3[79959]: ansible-ansible.legacy.async_status Invoked with jid=j905361730444.79355 mode=status _async_dir=/root/.ansible_async
Jan 10 11:58:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:03 np0005580781 python3[80058]: ansible-ansible.legacy.async_status Invoked with jid=j905361730444.79355 mode=cleanup _async_dir=/root/.ansible_async
Jan 10 11:58:03 np0005580781 podman[80100]: 2026-01-10 16:58:03.622744096 +0000 UTC m=+0.045652885 container create f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:58:03 np0005580781 systemd[1]: Started libpod-conmon-f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b.scope.
Jan 10 11:58:03 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:03 np0005580781 podman[80100]: 2026-01-10 16:58:03.696622918 +0000 UTC m=+0.119531747 container init f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 11:58:03 np0005580781 podman[80100]: 2026-01-10 16:58:03.601572619 +0000 UTC m=+0.024481418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:03 np0005580781 podman[80100]: 2026-01-10 16:58:03.706142298 +0000 UTC m=+0.129051087 container start f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:58:03 np0005580781 infallible_bardeen[80116]: 167 167
Jan 10 11:58:03 np0005580781 podman[80100]: 2026-01-10 16:58:03.710189018 +0000 UTC m=+0.133097827 container attach f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:03 np0005580781 systemd[1]: libpod-f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b.scope: Deactivated successfully.
Jan 10 11:58:03 np0005580781 podman[80100]: 2026-01-10 16:58:03.711942986 +0000 UTC m=+0.134851775 container died f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:03 np0005580781 systemd[1]: var-lib-containers-storage-overlay-efe4af7f4251a8e81aee2a2db7546ed8e9551996150cddc555ec7f112f011f5b-merged.mount: Deactivated successfully.
Jan 10 11:58:03 np0005580781 podman[80100]: 2026-01-10 16:58:03.753345034 +0000 UTC m=+0.176253813 container remove f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_bardeen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 11:58:03 np0005580781 systemd[1]: libpod-conmon-f23d2598105e22661590b50655da2cefd9ac17463bc54583ac72b1764cfba32b.scope: Deactivated successfully.
Jan 10 11:58:03 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:03 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:03 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:03 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: Updating compute-0:/var/lib/ceph/a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/config/ceph.client.admin.keyring
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: Deploying daemon crash.compute-0 on compute-0
Jan 10 11:58:04 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:04 np0005580781 python3[80196]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 10 11:58:04 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:04 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:04 np0005580781 systemd[1]: Starting Ceph crash.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:58:04 np0005580781 podman[80284]: 2026-01-10 16:58:04.61974698 +0000 UTC m=+0.046249432 container create 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:04 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:04 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:04 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:04 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cdc464ffe110a3335ba20779adb496e9ec9f34882212aacc4603e9a6759d6ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:04 np0005580781 podman[80284]: 2026-01-10 16:58:04.601147453 +0000 UTC m=+0.027649915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:04 np0005580781 podman[80284]: 2026-01-10 16:58:04.698633199 +0000 UTC m=+0.125135671 container init 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:04 np0005580781 podman[80284]: 2026-01-10 16:58:04.711567271 +0000 UTC m=+0.138069713 container start 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 11:58:04 np0005580781 bash[80284]: 2d8e6ffc82d6a2078d3f81ea77434ec395ce6aa2d60412fa836e9aac21c1f66d
Jan 10 11:58:04 np0005580781 systemd[1]: Started Ceph crash.compute-0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:04 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev 43d00670-8115-4c9e-9c2a-f2473001d570 (Updating crash deployment (+1 -> 1))
Jan 10 11:58:04 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event 43d00670-8115-4c9e-9c2a-f2473001d570 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:04 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev e0f3e90c-1385-4e85-960d-7157f9f85130 (Updating mgr deployment (+1 -> 2))
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr services"} : dispatch
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:04 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.ipbphh on compute-0
Jan 10 11:58:04 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.ipbphh on compute-0
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.876+0000 7fc2d4549640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.876+0000 7fc2d4549640 -1 AuthRegistry(0x7fc2cc053640) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.878+0000 7fc2d4549640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.878+0000 7fc2d4549640 -1 AuthRegistry(0x7fc2d4547fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.881+0000 7fc2d22be640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: 2026-01-10T16:58:04.881+0000 7fc2d4549640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 10 11:58:04 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-crash-compute-0[80299]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 10 11:58:04 np0005580781 python3[80331]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:04 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:05 np0005580781 podman[80374]: 2026-01-10 16:58:05.036214777 +0000 UTC m=+0.073008940 container create 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 11:58:05 np0005580781 systemd[1]: Started libpod-conmon-2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f.scope.
Jan 10 11:58:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 11:58:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ipbphh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 10 11:58:05 np0005580781 podman[80374]: 2026-01-10 16:58:05.014480135 +0000 UTC m=+0.051274308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:05 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:05 np0005580781 podman[80374]: 2026-01-10 16:58:05.135461741 +0000 UTC m=+0.172255934 container init 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:05 np0005580781 podman[80374]: 2026-01-10 16:58:05.144804386 +0000 UTC m=+0.181598559 container start 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 11:58:05 np0005580781 podman[80374]: 2026-01-10 16:58:05.147875989 +0000 UTC m=+0.184670162 container attach 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:05 np0005580781 podman[80470]: 2026-01-10 16:58:05.451755529 +0000 UTC m=+0.030858122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:05 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 11:58:05 np0005580781 focused_shamir[80408]: 
Jan 10 11:58:05 np0005580781 focused_shamir[80408]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 11:58:05 np0005580781 systemd[1]: libpod-2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f.scope: Deactivated successfully.
Jan 10 11:58:05 np0005580781 podman[80470]: 2026-01-10 16:58:05.642367233 +0000 UTC m=+0.221469806 container create cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:05 np0005580781 podman[80374]: 2026-01-10 16:58:05.643665518 +0000 UTC m=+0.680459681 container died 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 11:58:05 np0005580781 systemd[1]: var-lib-containers-storage-overlay-46ddc3e3c03967c1d3f744a45e154bc01095f1243f721ff0dc3388ec56ba78b6-merged.mount: Deactivated successfully.
Jan 10 11:58:05 np0005580781 podman[80374]: 2026-01-10 16:58:05.753765228 +0000 UTC m=+0.790559431 container remove 2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f (image=quay.io/ceph/ceph:v20, name=focused_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 11:58:05 np0005580781 systemd[1]: libpod-conmon-2c2c3a1b02a8814970fac440653a73338185b7a2bc7af6d058ec88ceecc5884f.scope: Deactivated successfully.
Jan 10 11:58:05 np0005580781 systemd[1]: Started libpod-conmon-cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3.scope.
Jan 10 11:58:05 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:05 np0005580781 podman[80470]: 2026-01-10 16:58:05.834858107 +0000 UTC m=+0.413960710 container init cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:05 np0005580781 podman[80470]: 2026-01-10 16:58:05.841393425 +0000 UTC m=+0.420495998 container start cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:05 np0005580781 intelligent_goldberg[80498]: 167 167
Jan 10 11:58:05 np0005580781 systemd[1]: libpod-cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3.scope: Deactivated successfully.
Jan 10 11:58:05 np0005580781 podman[80470]: 2026-01-10 16:58:05.85513747 +0000 UTC m=+0.434240213 container attach cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 11:58:05 np0005580781 podman[80470]: 2026-01-10 16:58:05.85735352 +0000 UTC m=+0.436456163 container died cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 11:58:05 np0005580781 systemd[1]: var-lib-containers-storage-overlay-903f7a70873eec468a7c2fb856683b482df1ba16ee0c7af0663c0f236514830d-merged.mount: Deactivated successfully.
Jan 10 11:58:05 np0005580781 podman[80470]: 2026-01-10 16:58:05.904203727 +0000 UTC m=+0.483306300 container remove cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:05 np0005580781 systemd[1]: libpod-conmon-cde994ef162baf371371b446b7e69206dc066638e3d45a44b7f8845416bfdee3.scope: Deactivated successfully.
Jan 10 11:58:05 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:05 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:06 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:06 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:06 np0005580781 ceph-mon[75249]: Deploying daemon mgr.compute-0.ipbphh on compute-0
Jan 10 11:58:06 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:06 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:06 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:06 np0005580781 python3[80578]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:06 np0005580781 podman[80617]: 2026-01-10 16:58:06.486377149 +0000 UTC m=+0.068261541 container create 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 11:58:06 np0005580781 systemd[1]: Started libpod-conmon-673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa.scope.
Jan 10 11:58:06 np0005580781 systemd[1]: Starting Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:58:06 np0005580781 podman[80617]: 2026-01-10 16:58:06.46253926 +0000 UTC m=+0.044423672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:06 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:06 np0005580781 podman[80617]: 2026-01-10 16:58:06.581767498 +0000 UTC m=+0.163651880 container init 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:06 np0005580781 podman[80617]: 2026-01-10 16:58:06.589656503 +0000 UTC m=+0.171540875 container start 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:06 np0005580781 podman[80617]: 2026-01-10 16:58:06.593750715 +0000 UTC m=+0.175635087 container attach 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 11:58:06 np0005580781 ansible-async_wrapper.py[79408]: Done in kid B.
Jan 10 11:58:06 np0005580781 podman[80705]: 2026-01-10 16:58:06.81817762 +0000 UTC m=+0.043491066 container create 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 11:58:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106/merged/var/lib/ceph/mgr/ceph-compute-0.ipbphh supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:06 np0005580781 podman[80705]: 2026-01-10 16:58:06.884152777 +0000 UTC m=+0.109466243 container init 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:06 np0005580781 podman[80705]: 2026-01-10 16:58:06.89011644 +0000 UTC m=+0.115429886 container start 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 11:58:06 np0005580781 bash[80705]: 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e
Jan 10 11:58:06 np0005580781 podman[80705]: 2026-01-10 16:58:06.798171444 +0000 UTC m=+0.023484910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:06 np0005580781 systemd[1]: Started Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:58:06 np0005580781 ceph-mgr[80724]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:58:06 np0005580781 ceph-mgr[80724]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 10 11:58:06 np0005580781 ceph-mgr[80724]: pidfile_write: ignore empty --pid-file
Jan 10 11:58:06 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:06 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'alerts'
Jan 10 11:58:06 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:06 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:06 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:07 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev e0f3e90c-1385-4e85-960d-7157f9f85130 (Updating mgr deployment (+1 -> 2))
Jan 10 11:58:07 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event e0f3e90c-1385-4e85-960d-7157f9f85130 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4073465749' entity='client.admin' 
Jan 10 11:58:07 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'balancer'
Jan 10 11:58:07 np0005580781 systemd[1]: libpod-673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa.scope: Deactivated successfully.
Jan 10 11:58:07 np0005580781 podman[80617]: 2026-01-10 16:58:07.115308685 +0000 UTC m=+0.697193067 container died 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:07 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/4073465749' entity='client.admin' 
Jan 10 11:58:07 np0005580781 systemd[1]: var-lib-containers-storage-overlay-837be2422d1541b8bea93fa32d3040d5f44772e121cf190b77dcd9da8410d32f-merged.mount: Deactivated successfully.
Jan 10 11:58:07 np0005580781 podman[80617]: 2026-01-10 16:58:07.16061926 +0000 UTC m=+0.742503652 container remove 673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa (image=quay.io/ceph/ceph:v20, name=eloquent_ishizaka, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 11:58:07 np0005580781 systemd[1]: libpod-conmon-673b0017245bd6177b8663d4c225107a377d777320d7af042669283e0623b4aa.scope: Deactivated successfully.
Jan 10 11:58:07 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'cephadm'
Jan 10 11:58:07 np0005580781 python3[80858]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:07 np0005580781 podman[80874]: 2026-01-10 16:58:07.54599065 +0000 UTC m=+0.051241167 container create bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 11:58:07 np0005580781 systemd[1]: Started libpod-conmon-bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67.scope.
Jan 10 11:58:07 np0005580781 podman[80874]: 2026-01-10 16:58:07.523076276 +0000 UTC m=+0.028326793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:07 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:07 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:07 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:07 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:07 np0005580781 podman[80874]: 2026-01-10 16:58:07.658213007 +0000 UTC m=+0.163463524 container init bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 11:58:07 np0005580781 podman[80874]: 2026-01-10 16:58:07.665736432 +0000 UTC m=+0.170986929 container start bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 11:58:07 np0005580781 podman[80874]: 2026-01-10 16:58:07.690016253 +0000 UTC m=+0.195266750 container attach bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:07 np0005580781 podman[80919]: 2026-01-10 16:58:07.745905036 +0000 UTC m=+0.127273428 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:07 np0005580781 podman[80919]: 2026-01-10 16:58:07.840327159 +0000 UTC m=+0.221695541 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 11:58:07 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:08 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'crash'
Jan 10 11:58:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 10 11:58:08 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'dashboard'
Jan 10 11:58:08 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4080358249' entity='client.admin' 
Jan 10 11:58:08 np0005580781 systemd[1]: libpod-bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67.scope: Deactivated successfully.
Jan 10 11:58:08 np0005580781 podman[80874]: 2026-01-10 16:58:08.234845508 +0000 UTC m=+0.740096035 container died bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:08 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3ae94549b8c98c5945f76f626806d09b40cf21ee56f91556fe9564f1c49395b7-merged.mount: Deactivated successfully.
Jan 10 11:58:08 np0005580781 podman[80874]: 2026-01-10 16:58:08.678049914 +0000 UTC m=+1.183300421 container remove bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67 (image=quay.io/ceph/ceph:v20, name=priceless_fermi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:08 np0005580781 systemd[1]: libpod-conmon-bb574e897007afe5680b2bc61b31077d7026dd2bb22e4e7474d975f130ac5e67.scope: Deactivated successfully.
Jan 10 11:58:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:08 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'devicehealth'
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [progress INFO root] Writing back 2 completed events
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:58:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:58:09 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'diskprediction_local'
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 11:58:09 np0005580781 python3[81095]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:09 np0005580781 podman[81096]: 2026-01-10 16:58:09.173350139 +0000 UTC m=+0.106209905 container create 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/4080358249' entity='client.admin' 
Jan 10 11:58:09 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh[80720]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 10 11:58:09 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh[80720]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 10 11:58:09 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh[80720]:  from numpy import show_config as show_numpy_config
Jan 10 11:58:09 np0005580781 podman[81096]: 2026-01-10 16:58:09.104931955 +0000 UTC m=+0.037791721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:09 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'influx'
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:58:09 np0005580781 systemd[1]: Started libpod-conmon-5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d.scope.
Jan 10 11:58:09 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:09 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:09 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:09 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:09 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'insights'
Jan 10 11:58:09 np0005580781 podman[81096]: 2026-01-10 16:58:09.323267254 +0000 UTC m=+0.256126990 container init 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 11:58:09 np0005580781 podman[81096]: 2026-01-10 16:58:09.331168429 +0000 UTC m=+0.264028155 container start 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:09 np0005580781 podman[81096]: 2026-01-10 16:58:09.336474974 +0000 UTC m=+0.269334890 container attach 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:09 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 10 11:58:09 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:09 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 10 11:58:09 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 10 11:58:09 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'iostat'
Jan 10 11:58:09 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'k8sevents'
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 10 11:58:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 10 11:58:09 np0005580781 podman[81227]: 2026-01-10 16:58:09.865661712 +0000 UTC m=+0.068070395 container create 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 10 11:58:09 np0005580781 systemd[1]: Started libpod-conmon-6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3.scope.
Jan 10 11:58:09 np0005580781 podman[81227]: 2026-01-10 16:58:09.838313877 +0000 UTC m=+0.040722620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:09 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:09 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:09 np0005580781 podman[81227]: 2026-01-10 16:58:09.982744502 +0000 UTC m=+0.185153255 container init 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:09 np0005580781 podman[81227]: 2026-01-10 16:58:09.994003509 +0000 UTC m=+0.196412212 container start 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:10 np0005580781 podman[81227]: 2026-01-10 16:58:09.998338847 +0000 UTC m=+0.200747520 container attach 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:10 np0005580781 practical_elion[81244]: 167 167
Jan 10 11:58:10 np0005580781 systemd[1]: libpod-6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3.scope: Deactivated successfully.
Jan 10 11:58:10 np0005580781 podman[81227]: 2026-01-10 16:58:10.007052235 +0000 UTC m=+0.209460908 container died 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:10 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'localpool'
Jan 10 11:58:10 np0005580781 systemd[1]: var-lib-containers-storage-overlay-982cab76a062523e164fed239fb13a5a1bdcd63c4b8ae5c6712ffa1d553d5641-merged.mount: Deactivated successfully.
Jan 10 11:58:10 np0005580781 podman[81227]: 2026-01-10 16:58:10.052659897 +0000 UTC m=+0.255068560 container remove 6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3 (image=quay.io/ceph/ceph:v20, name=practical_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 11:58:10 np0005580781 systemd[1]: libpod-conmon-6c3ad1d802d679e8f47c05a87c9f2712c1d280afa09a5846aa0aa83aca08bdf3.scope: Deactivated successfully.
Jan 10 11:58:10 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'mds_autoscaler'
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mkxlpr (unknown last config time)...
Jan 10 11:58:10 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mkxlpr (unknown last config time)...
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mkxlpr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mkxlpr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mgr services"} : dispatch
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:10 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mkxlpr on compute-0
Jan 10 11:58:10 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mkxlpr on compute-0
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mkxlpr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 10 11:58:10 np0005580781 happy_heyrovsky[81111]: set require_min_compat_client to mimic
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 10 11:58:10 np0005580781 systemd[1]: libpod-5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d.scope: Deactivated successfully.
Jan 10 11:58:10 np0005580781 podman[81096]: 2026-01-10 16:58:10.35040081 +0000 UTC m=+1.283260576 container died 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 11:58:10 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'mirroring'
Jan 10 11:58:10 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3b87239b09842a15b121c69feb57cd6aa476480b29f57ea7f3dfdeb721efc09b-merged.mount: Deactivated successfully.
Jan 10 11:58:10 np0005580781 podman[81096]: 2026-01-10 16:58:10.40766305 +0000 UTC m=+1.340522776 container remove 5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d (image=quay.io/ceph/ceph:v20, name=happy_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 11:58:10 np0005580781 systemd[1]: libpod-conmon-5078c7575c64fa9de24a58a06b11fcfcef93f92d8e799154fe7fbe732acbb19d.scope: Deactivated successfully.
Jan 10 11:58:10 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'nfs'
Jan 10 11:58:10 np0005580781 podman[81341]: 2026-01-10 16:58:10.642636862 +0000 UTC m=+0.044139474 container create ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 11:58:10 np0005580781 systemd[1]: Started libpod-conmon-ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7.scope.
Jan 10 11:58:10 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:10 np0005580781 podman[81341]: 2026-01-10 16:58:10.622349369 +0000 UTC m=+0.023851981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:10 np0005580781 podman[81341]: 2026-01-10 16:58:10.726110336 +0000 UTC m=+0.127612978 container init ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 11:58:10 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'orchestrator'
Jan 10 11:58:10 np0005580781 podman[81341]: 2026-01-10 16:58:10.735471772 +0000 UTC m=+0.136974414 container start ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 11:58:10 np0005580781 podman[81341]: 2026-01-10 16:58:10.73981525 +0000 UTC m=+0.141317942 container attach ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:10 np0005580781 mystifying_hodgkin[81356]: 167 167
Jan 10 11:58:10 np0005580781 systemd[1]: libpod-ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7.scope: Deactivated successfully.
Jan 10 11:58:10 np0005580781 podman[81341]: 2026-01-10 16:58:10.741745942 +0000 UTC m=+0.143248554 container died ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:10 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1e8a0218286aeed4dc01e6d0cbd86b96c4f9eb945cdae28544bfad470a752502-merged.mount: Deactivated successfully.
Jan 10 11:58:10 np0005580781 podman[81341]: 2026-01-10 16:58:10.783970373 +0000 UTC m=+0.185472995 container remove ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7 (image=quay.io/ceph/ceph:v20, name=mystifying_hodgkin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:10 np0005580781 systemd[1]: libpod-conmon-ffca4a86fb41e9f874457a5cab9373ef65cfcedbe39acff4a97500867c688fb7.scope: Deactivated successfully.
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:10 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:11 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'osd_perf_query'
Jan 10 11:58:11 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'osd_support'
Jan 10 11:58:11 np0005580781 python3[81427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:11 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'pg_autoscaler'
Jan 10 11:58:11 np0005580781 podman[81449]: 2026-01-10 16:58:11.202170497 +0000 UTC m=+0.049956652 container create 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:11 np0005580781 systemd[1]: Started libpod-conmon-499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb.scope.
Jan 10 11:58:11 np0005580781 podman[81449]: 2026-01-10 16:58:11.17362222 +0000 UTC m=+0.021408375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:11 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'progress'
Jan 10 11:58:11 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:11 np0005580781 ceph-mon[75249]: Reconfiguring mgr.compute-0.mkxlpr (unknown last config time)...
Jan 10 11:58:11 np0005580781 ceph-mon[75249]: Reconfiguring daemon mgr.compute-0.mkxlpr on compute-0
Jan 10 11:58:11 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2258513117' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 10 11:58:11 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:11 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:11 np0005580781 podman[81449]: 2026-01-10 16:58:11.333417133 +0000 UTC m=+0.181203298 container init 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 10 11:58:11 np0005580781 podman[81449]: 2026-01-10 16:58:11.342008867 +0000 UTC m=+0.189795012 container start 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 11:58:11 np0005580781 podman[81449]: 2026-01-10 16:58:11.362743342 +0000 UTC m=+0.210529507 container attach 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:11 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'prometheus'
Jan 10 11:58:11 np0005580781 podman[81527]: 2026-01-10 16:58:11.647881731 +0000 UTC m=+0.065906067 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 11:58:11 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:58:11 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'rbd_support'
Jan 10 11:58:11 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:11 np0005580781 podman[81527]: 2026-01-10 16:58:11.973310188 +0000 UTC m=+0.391334514 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:12 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'rgw'
Jan 10 11:58:12 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'rook'
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Added host compute-0
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 stupefied_visvesvaraya[81463]: Added host 'compute-0' with addr '192.168.122.100'
Jan 10 11:58:12 np0005580781 stupefied_visvesvaraya[81463]: Scheduled mon update...
Jan 10 11:58:12 np0005580781 stupefied_visvesvaraya[81463]: Scheduled mgr update...
Jan 10 11:58:12 np0005580781 stupefied_visvesvaraya[81463]: Scheduled osd.default_drive_group update...
Jan 10 11:58:12 np0005580781 systemd[1]: libpod-499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb.scope: Deactivated successfully.
Jan 10 11:58:12 np0005580781 podman[81449]: 2026-01-10 16:58:12.529492322 +0000 UTC m=+1.377278477 container died 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ec208ef7c071c5e23c8be96041225a1ddc4728d1bd271cb334902b0f100d43c3-merged.mount: Deactivated successfully.
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:12 np0005580781 podman[81449]: 2026-01-10 16:58:12.594882004 +0000 UTC m=+1.442668149 container remove 499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb (image=quay.io/ceph/ceph:v20, name=stupefied_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:58:12 np0005580781 systemd[1]: libpod-conmon-499bb7b6bab80c142a72640f8dd14d8846f53e6880d431fd3f1545f6532742cb.scope: Deactivated successfully.
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 10 11:58:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev 1b329ec7-f94d-4acf-bc23-ab98af1723ab (Updating mgr deployment (-1 -> 1))
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.ipbphh from compute-0 -- ports [8765]
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.ipbphh from compute-0 -- ports [8765]
Jan 10 11:58:12 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:12 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'selftest'
Jan 10 11:58:13 np0005580781 ceph-mgr[80724]: mgr[py] Loading python module 'smb'
Jan 10 11:58:13 np0005580781 systemd[1]: Stopping Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:58:13 np0005580781 python3[81795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:13 np0005580781 podman[81821]: 2026-01-10 16:58:13.187169072 +0000 UTC m=+0.074524902 container create 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 11:58:13 np0005580781 systemd[1]: Started libpod-conmon-287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb.scope.
Jan 10 11:58:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:13 np0005580781 podman[81821]: 2026-01-10 16:58:13.152180579 +0000 UTC m=+0.039536429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:13 np0005580781 podman[81821]: 2026-01-10 16:58:13.258409743 +0000 UTC m=+0.145765623 container init 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:13 np0005580781 podman[81821]: 2026-01-10 16:58:13.265683911 +0000 UTC m=+0.153039741 container start 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:13 np0005580781 podman[81821]: 2026-01-10 16:58:13.269206297 +0000 UTC m=+0.156562157 container attach 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:13 np0005580781 podman[81854]: 2026-01-10 16:58:13.313072162 +0000 UTC m=+0.095839112 container died 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 11:58:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1e8879709b2c97c9bd9b874415f21c463d79238876163c808652871dd0218106-merged.mount: Deactivated successfully.
Jan 10 11:58:13 np0005580781 podman[81854]: 2026-01-10 16:58:13.365614054 +0000 UTC m=+0.148381004 container remove 04d4013852c39c056183895fcaf6bf9efdade85b93515959452caf8dabcdfe7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 11:58:13 np0005580781 bash[81854]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-ipbphh
Jan 10 11:58:13 np0005580781 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.ipbphh.service: Main process exited, code=exited, status=143/n/a
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: Added host compute-0
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: Saving service mon spec with placement compute-0
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: Saving service mgr spec with placement compute-0
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: Saving service osd.default_drive_group spec with placement compute-0
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: Removing daemon mgr.compute-0.ipbphh from compute-0 -- ports [8765]
Jan 10 11:58:13 np0005580781 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.ipbphh.service: Failed with result 'exit-code'.
Jan 10 11:58:13 np0005580781 systemd[1]: Stopped Ceph mgr.compute-0.ipbphh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:58:13 np0005580781 systemd[1]: ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.ipbphh.service: Consumed 7.238s CPU time, 382.6M memory peak, read 0B from disk, written 560.0K to disk.
Jan 10 11:58:13 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:13 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:13 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285699077' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 11:58:13 np0005580781 wizardly_proskuriakova[81860]: 
Jan 10 11:58:13 np0005580781 wizardly_proskuriakova[81860]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":55,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-10T16:57:15:771836+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-10T16:57:15.774565+0000","services":{}},"progress_events":{}}
Jan 10 11:58:13 np0005580781 podman[81821]: 2026-01-10 16:58:13.77213392 +0000 UTC m=+0.659489760 container died 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:13 np0005580781 systemd[1]: libpod-287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb.scope: Deactivated successfully.
Jan 10 11:58:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-654fef234f5b1ac56231a02c2efde53fd70e5b64263c35b34d0aad81dae2c077-merged.mount: Deactivated successfully.
Jan 10 11:58:13 np0005580781 podman[81821]: 2026-01-10 16:58:13.870420758 +0000 UTC m=+0.757776638 container remove 287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb (image=quay.io/ceph/ceph:v20, name=wizardly_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:13 np0005580781 systemd[1]: libpod-conmon-287424092b5c1a893b2c2924b54702f937c4d4bb1cacb9de8565e4161d4365cb.scope: Deactivated successfully.
Jan 10 11:58:13 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.ipbphh
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"} v 0)
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"} : dispatch
Jan 10 11:58:13 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.ipbphh
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"}]': finished
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 11:58:13 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev 1b329ec7-f94d-4acf-bc23-ab98af1723ab (Updating mgr deployment (-1 -> 1))
Jan 10 11:58:13 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event 1b329ec7-f94d-4acf-bc23-ab98af1723ab (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:14 np0005580781 ceph-mgr[75538]: [progress INFO root] Writing back 3 completed events
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: Removing key for mgr.compute-0.ipbphh
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"} : dispatch
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ipbphh"}]': finished
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:58:14 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:14 np0005580781 podman[82053]: 2026-01-10 16:58:14.549891081 +0000 UTC m=+0.060043447 container create ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:14 np0005580781 systemd[1]: Started libpod-conmon-ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094.scope.
Jan 10 11:58:14 np0005580781 podman[82053]: 2026-01-10 16:58:14.529031862 +0000 UTC m=+0.039184238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:14 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:14 np0005580781 podman[82053]: 2026-01-10 16:58:14.649891425 +0000 UTC m=+0.160043831 container init ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:14 np0005580781 podman[82053]: 2026-01-10 16:58:14.672862431 +0000 UTC m=+0.183014787 container start ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:14 np0005580781 podman[82053]: 2026-01-10 16:58:14.677839456 +0000 UTC m=+0.187991852 container attach ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:14 np0005580781 brave_chatterjee[82070]: 167 167
Jan 10 11:58:14 np0005580781 systemd[1]: libpod-ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094.scope: Deactivated successfully.
Jan 10 11:58:14 np0005580781 podman[82053]: 2026-01-10 16:58:14.681116646 +0000 UTC m=+0.191269022 container died ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:14 np0005580781 systemd[1]: var-lib-containers-storage-overlay-68e03338b6bef4f7aa780abfcda19b90c43d1ea27a77f84d639d4521ac2ea22b-merged.mount: Deactivated successfully.
Jan 10 11:58:14 np0005580781 podman[82053]: 2026-01-10 16:58:14.739028094 +0000 UTC m=+0.249180460 container remove ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:14 np0005580781 systemd[1]: libpod-conmon-ac02d479cf61f7dfb6eebf90bc842fbeb034338401e56e87712ea65cab186094.scope: Deactivated successfully.
Jan 10 11:58:14 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:14 np0005580781 podman[82095]: 2026-01-10 16:58:14.966181402 +0000 UTC m=+0.059756449 container create b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 11:58:15 np0005580781 systemd[1]: Started libpod-conmon-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope.
Jan 10 11:58:15 np0005580781 podman[82095]: 2026-01-10 16:58:14.939642769 +0000 UTC m=+0.033217866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:15 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:15 np0005580781 podman[82095]: 2026-01-10 16:58:15.062914657 +0000 UTC m=+0.156489704 container init b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 11:58:15 np0005580781 podman[82095]: 2026-01-10 16:58:15.077506495 +0000 UTC m=+0.171081542 container start b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:15 np0005580781 podman[82095]: 2026-01-10 16:58:15.08063766 +0000 UTC m=+0.174212717 container attach b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 11:58:15 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:15 np0005580781 silly_hertz[82112]: --> passed data devices: 0 physical, 3 LVM
Jan 10 11:58:15 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:16 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:16 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9aa1dcc9-88f4-49c0-be40-744313964d3e
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"} v 0)
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"} : dispatch
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"}]': finished
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:16 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:16 np0005580781 silly_hertz[82112]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 10 11:58:16 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 10 11:58:16 np0005580781 lvm[82204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:58:16 np0005580781 lvm[82204]: VG ceph_vg0 finished
Jan 10 11:58:16 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 11:58:16 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:16 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 10 11:58:16 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"} : dispatch
Jan 10 11:58:16 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2337355461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9aa1dcc9-88f4-49c0-be40-744313964d3e"}]': finished
Jan 10 11:58:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 10 11:58:17 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890078762' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 11:58:17 np0005580781 silly_hertz[82112]: stderr: got monmap epoch 1
Jan 10 11:58:17 np0005580781 silly_hertz[82112]: --> Creating keyring file for osd.0
Jan 10 11:58:17 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 10 11:58:17 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 10 11:58:17 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 9aa1dcc9-88f4-49c0-be40-744313964d3e --setuser ceph --setgroup ceph
Jan 10 11:58:17 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:17 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 10 11:58:17 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 10 11:58:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: stderr: 2026-01-10T16:58:17.499+0000 7f2c8dc7b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: stderr: 2026-01-10T16:58:17.525+0000 7f2c8dc7b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:18 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e8e31518-65ae-476c-891c-e2fc550d0a1c
Jan 10 11:58:18 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:18 np0005580781 ceph-mon[75249]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 10 11:58:18 np0005580781 ceph-mon[75249]: Cluster is now healthy
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"} v 0)
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"} : dispatch
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"}]': finished
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:19 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:19 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:19 np0005580781 lvm[83155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:19 np0005580781 lvm[83155]: VG ceph_vg1 finished
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 10 11:58:19 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566005166' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: stderr: got monmap epoch 1
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: --> Creating keyring file for osd.1
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 10 11:58:19 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid e8e31518-65ae-476c-891c-e2fc550d0a1c --setuser ceph --setgroup ceph
Jan 10 11:58:19 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:20 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"} : dispatch
Jan 10 11:58:20 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3731380388' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e8e31518-65ae-476c-891c-e2fc550d0a1c"}]': finished
Jan 10 11:58:20 np0005580781 silly_hertz[82112]: stderr: 2026-01-10T16:58:19.916+0000 7f5caeeea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 10 11:58:20 np0005580781 silly_hertz[82112]: stderr: 2026-01-10T16:58:19.940+0000 7f5caeeea8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 10 11:58:20 np0005580781 silly_hertz[82112]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 10 11:58:20 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 11:58:20 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 10 11:58:20 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 87473727-6468-4f68-8371-e0bf60edaa43
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"} v 0)
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"} : dispatch
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"}]': finished
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:21 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:21 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:21 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:21 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:21 np0005580781 lvm[84101]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:58:21 np0005580781 lvm[84101]: VG ceph_vg2 finished
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:21 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 10 11:58:21 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:22 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"} : dispatch
Jan 10 11:58:22 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/4144688744' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "87473727-6468-4f68-8371-e0bf60edaa43"}]': finished
Jan 10 11:58:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 10 11:58:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745133082' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 10 11:58:22 np0005580781 silly_hertz[82112]: stderr: got monmap epoch 1
Jan 10 11:58:22 np0005580781 silly_hertz[82112]: --> Creating keyring file for osd.2
Jan 10 11:58:22 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 10 11:58:22 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 10 11:58:22 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 87473727-6468-4f68-8371-e0bf60edaa43 --setuser ceph --setgroup ceph
Jan 10 11:58:22 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: stderr: 2026-01-10T16:58:22.439+0000 7fb3b6dec8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: stderr: 2026-01-10T16:58:22.455+0000 7fb3b6dec8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 11:58:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 10 11:58:23 np0005580781 silly_hertz[82112]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 10 11:58:23 np0005580781 systemd[1]: libpod-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope: Deactivated successfully.
Jan 10 11:58:23 np0005580781 systemd[1]: libpod-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope: Consumed 6.799s CPU time.
Jan 10 11:58:23 np0005580781 podman[82095]: 2026-01-10 16:58:23.447297481 +0000 UTC m=+8.540872528 container died b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:23 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3ef725e03d39c6f7f34eb122e0ee68caada870e0e46d23ee46344dfa586fa862-merged.mount: Deactivated successfully.
Jan 10 11:58:23 np0005580781 podman[82095]: 2026-01-10 16:58:23.5446869 +0000 UTC m=+8.638261937 container remove b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_hertz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 11:58:23 np0005580781 systemd[1]: libpod-conmon-b23cbc9334d1024541c159f2eb3d8a951cd784aaaf9676103deace2b82480c36.scope: Deactivated successfully.
Jan 10 11:58:23 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:24 np0005580781 podman[85099]: 2026-01-10 16:58:24.02086021 +0000 UTC m=+0.043201805 container create e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:24 np0005580781 systemd[1]: Started libpod-conmon-e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3.scope.
Jan 10 11:58:24 np0005580781 podman[85099]: 2026-01-10 16:58:24.001876502 +0000 UTC m=+0.024218117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:24 np0005580781 podman[85099]: 2026-01-10 16:58:24.112076044 +0000 UTC m=+0.134417719 container init e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 11:58:24 np0005580781 podman[85099]: 2026-01-10 16:58:24.118254369 +0000 UTC m=+0.140595964 container start e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 11:58:24 np0005580781 podman[85099]: 2026-01-10 16:58:24.121578174 +0000 UTC m=+0.143919849 container attach e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 11:58:24 np0005580781 eloquent_wilbur[85116]: 167 167
Jan 10 11:58:24 np0005580781 systemd[1]: libpod-e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3.scope: Deactivated successfully.
Jan 10 11:58:24 np0005580781 podman[85099]: 2026-01-10 16:58:24.123318933 +0000 UTC m=+0.145660528 container died e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 11:58:24 np0005580781 systemd[1]: var-lib-containers-storage-overlay-fb32be3cf9188cb1c63c4a3fc06f9688b1b4de92a6dd022f96a7a935dd1c1608-merged.mount: Deactivated successfully.
Jan 10 11:58:24 np0005580781 podman[85099]: 2026-01-10 16:58:24.163242824 +0000 UTC m=+0.185584459 container remove e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:24 np0005580781 systemd[1]: libpod-conmon-e2add8eb7e49bf9405fe5a662681e05c6f62018a70253b993afb7e7013f23be3.scope: Deactivated successfully.
Jan 10 11:58:24 np0005580781 podman[85141]: 2026-01-10 16:58:24.361734247 +0000 UTC m=+0.055125002 container create a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 11:58:24 np0005580781 systemd[1]: Started libpod-conmon-a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a.scope.
Jan 10 11:58:24 np0005580781 podman[85141]: 2026-01-10 16:58:24.334160256 +0000 UTC m=+0.027551071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:24 np0005580781 podman[85141]: 2026-01-10 16:58:24.457653555 +0000 UTC m=+0.151044300 container init a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 11:58:24 np0005580781 podman[85141]: 2026-01-10 16:58:24.465481087 +0000 UTC m=+0.158871792 container start a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:24 np0005580781 podman[85141]: 2026-01-10 16:58:24.469629514 +0000 UTC m=+0.163020229 container attach a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:24 np0005580781 cool_jang[85158]: {
Jan 10 11:58:24 np0005580781 cool_jang[85158]:    "0": [
Jan 10 11:58:24 np0005580781 cool_jang[85158]:        {
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "devices": [
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "/dev/loop3"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            ],
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_name": "ceph_lv0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_size": "21470642176",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "name": "ceph_lv0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "tags": {
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.crush_device_class": "",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.encrypted": "0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osd_id": "0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.type": "block",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.vdo": "0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.with_tpm": "0"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            },
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "type": "block",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "vg_name": "ceph_vg0"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:        }
Jan 10 11:58:24 np0005580781 cool_jang[85158]:    ],
Jan 10 11:58:24 np0005580781 cool_jang[85158]:    "1": [
Jan 10 11:58:24 np0005580781 cool_jang[85158]:        {
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "devices": [
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "/dev/loop4"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            ],
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_name": "ceph_lv1",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_size": "21470642176",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "name": "ceph_lv1",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "tags": {
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.crush_device_class": "",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.encrypted": "0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osd_id": "1",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.type": "block",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.vdo": "0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.with_tpm": "0"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            },
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "type": "block",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "vg_name": "ceph_vg1"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:        }
Jan 10 11:58:24 np0005580781 cool_jang[85158]:    ],
Jan 10 11:58:24 np0005580781 cool_jang[85158]:    "2": [
Jan 10 11:58:24 np0005580781 cool_jang[85158]:        {
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "devices": [
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "/dev/loop5"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            ],
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_name": "ceph_lv2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_size": "21470642176",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "name": "ceph_lv2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "tags": {
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.crush_device_class": "",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.encrypted": "0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osd_id": "2",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.type": "block",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.vdo": "0",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:                "ceph.with_tpm": "0"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            },
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "type": "block",
Jan 10 11:58:24 np0005580781 cool_jang[85158]:            "vg_name": "ceph_vg2"
Jan 10 11:58:24 np0005580781 cool_jang[85158]:        }
Jan 10 11:58:24 np0005580781 cool_jang[85158]:    ]
Jan 10 11:58:24 np0005580781 cool_jang[85158]: }
Jan 10 11:58:24 np0005580781 systemd[1]: libpod-a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a.scope: Deactivated successfully.
Jan 10 11:58:24 np0005580781 podman[85141]: 2026-01-10 16:58:24.774377838 +0000 UTC m=+0.467768563 container died a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:24 np0005580781 systemd[1]: var-lib-containers-storage-overlay-f3b4d5321570074e251458578aa749bca589b4e6201b26456e97d5cbe6f36d2c-merged.mount: Deactivated successfully.
Jan 10 11:58:24 np0005580781 podman[85141]: 2026-01-10 16:58:24.812306922 +0000 UTC m=+0.505697637 container remove a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_jang, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:58:24 np0005580781 systemd[1]: libpod-conmon-a49857944294bedcdf451ff1f2e9c3157393c3f2c0befe1509e1d4717304209a.scope: Deactivated successfully.
Jan 10 11:58:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 10 11:58:24 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 10 11:58:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:24 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:24 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 10 11:58:24 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 10 11:58:24 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 10 11:58:25 np0005580781 podman[85267]: 2026-01-10 16:58:25.346231519 +0000 UTC m=+0.037114973 container create efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:25 np0005580781 systemd[1]: Started libpod-conmon-efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300.scope.
Jan 10 11:58:25 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:25 np0005580781 podman[85267]: 2026-01-10 16:58:25.416347885 +0000 UTC m=+0.107231369 container init efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:25 np0005580781 podman[85267]: 2026-01-10 16:58:25.421773439 +0000 UTC m=+0.112656893 container start efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:25 np0005580781 podman[85267]: 2026-01-10 16:58:25.425235487 +0000 UTC m=+0.116118971 container attach efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:25 np0005580781 stupefied_wilson[85283]: 167 167
Jan 10 11:58:25 np0005580781 podman[85267]: 2026-01-10 16:58:25.42675738 +0000 UTC m=+0.117640904 container died efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:25 np0005580781 systemd[1]: libpod-efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300.scope: Deactivated successfully.
Jan 10 11:58:25 np0005580781 podman[85267]: 2026-01-10 16:58:25.330442711 +0000 UTC m=+0.021326185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:25 np0005580781 systemd[1]: var-lib-containers-storage-overlay-fc3261ae95df1658f703649134ed7f7a85969e388b2cda06ffb74aa938f1379c-merged.mount: Deactivated successfully.
Jan 10 11:58:25 np0005580781 podman[85267]: 2026-01-10 16:58:25.470824489 +0000 UTC m=+0.161707933 container remove efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wilson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 11:58:25 np0005580781 systemd[1]: libpod-conmon-efe52c8c59a1f02c1c5b06a182d8cd4f58acdfce2346ed5837a55ab82486c300.scope: Deactivated successfully.
Jan 10 11:58:25 np0005580781 podman[85314]: 2026-01-10 16:58:25.712010561 +0000 UTC m=+0.058162819 container create 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 11:58:25 np0005580781 systemd[1]: Started libpod-conmon-1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5.scope.
Jan 10 11:58:25 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:25 np0005580781 podman[85314]: 2026-01-10 16:58:25.69044657 +0000 UTC m=+0.036598828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:25 np0005580781 podman[85314]: 2026-01-10 16:58:25.815652667 +0000 UTC m=+0.161804925 container init 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:25 np0005580781 podman[85314]: 2026-01-10 16:58:25.823803858 +0000 UTC m=+0.169956086 container start 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 11:58:25 np0005580781 podman[85314]: 2026-01-10 16:58:25.827192904 +0000 UTC m=+0.173345162 container attach 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:25 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:26 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test[85330]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 10 11:58:26 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test[85330]:                            [--no-systemd] [--no-tmpfs]
Jan 10 11:58:26 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test[85330]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 10 11:58:26 np0005580781 systemd[1]: libpod-1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5.scope: Deactivated successfully.
Jan 10 11:58:26 np0005580781 podman[85314]: 2026-01-10 16:58:26.034819976 +0000 UTC m=+0.380972204 container died 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:26 np0005580781 systemd[1]: var-lib-containers-storage-overlay-58585fae57886a343f733fbddcf823d5c60125e28c5c595ecdd3af390dceca18-merged.mount: Deactivated successfully.
Jan 10 11:58:26 np0005580781 ceph-mon[75249]: Deploying daemon osd.0 on compute-0
Jan 10 11:58:26 np0005580781 podman[85314]: 2026-01-10 16:58:26.277613854 +0000 UTC m=+0.623766092 container remove 1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 11:58:26 np0005580781 systemd[1]: libpod-conmon-1de9864f0c899d14568b9a800491a2c46d04cb83865e34fe7b39dcdf79f3f5f5.scope: Deactivated successfully.
Jan 10 11:58:26 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:26 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:26 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:26 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:26 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:26 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:26 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:27 np0005580781 systemd[1]: Starting Ceph osd.0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:58:27 np0005580781 podman[85489]: 2026-01-10 16:58:27.406928019 +0000 UTC m=+0.041532578 container create f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 11:58:27 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:27 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:27 np0005580781 podman[85489]: 2026-01-10 16:58:27.385964015 +0000 UTC m=+0.020568564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:27 np0005580781 podman[85489]: 2026-01-10 16:58:27.49273791 +0000 UTC m=+0.127342479 container init f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:58:27 np0005580781 podman[85489]: 2026-01-10 16:58:27.500787508 +0000 UTC m=+0.135392027 container start f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 11:58:27 np0005580781 podman[85489]: 2026-01-10 16:58:27.504270587 +0000 UTC m=+0.138875126 container attach f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:27 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:27 np0005580781 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:27 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:27 np0005580781 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:27 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:28 np0005580781 lvm[85590]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:28 np0005580781 lvm[85590]: VG ceph_vg1 finished
Jan 10 11:58:28 np0005580781 lvm[85589]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:58:28 np0005580781 lvm[85589]: VG ceph_vg0 finished
Jan 10 11:58:28 np0005580781 lvm[85592]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:58:28 np0005580781 lvm[85592]: VG ceph_vg2 finished
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:28 np0005580781 bash[85489]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 11:58:28 np0005580781 bash[85489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 10 11:58:28 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate[85504]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 10 11:58:28 np0005580781 bash[85489]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 10 11:58:28 np0005580781 systemd[1]: libpod-f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc.scope: Deactivated successfully.
Jan 10 11:58:28 np0005580781 systemd[1]: libpod-f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc.scope: Consumed 1.709s CPU time.
Jan 10 11:58:28 np0005580781 podman[85687]: 2026-01-10 16:58:28.779324047 +0000 UTC m=+0.047641348 container died f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:28 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c2cb7314cc5eeb5188aa2469443f220fa45cf959fde2fc58d537338c53cb281b-merged.mount: Deactivated successfully.
Jan 10 11:58:28 np0005580781 podman[85687]: 2026-01-10 16:58:28.837440455 +0000 UTC m=+0.105757686 container remove f159b6d2b3b622662674d8f4f3b0ccc3e7a2d2ed8ebb1bd8b9315c90d87799dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:28 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:29 np0005580781 podman[85745]: 2026-01-10 16:58:29.076986355 +0000 UTC m=+0.056744640 container create 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 11:58:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57f8c71121aa7c0f7143a63eac2ebce6be919029a57e8d90f0966a7775b4160/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:29 np0005580781 podman[85745]: 2026-01-10 16:58:29.048368298 +0000 UTC m=+0.028126623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:29 np0005580781 podman[85745]: 2026-01-10 16:58:29.154221355 +0000 UTC m=+0.133979690 container init 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 11:58:29 np0005580781 podman[85745]: 2026-01-10 16:58:29.159600231 +0000 UTC m=+0.139358516 container start 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 11:58:29 np0005580781 bash[85745]: 8bba0bcac67d61e0603c28f652cb0c35b79bb52b692367cba939aa36e7ec12fd
Jan 10 11:58:29 np0005580781 systemd[1]: Started Ceph osd.0 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: pidfile_write: ignore empty --pid-file
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:29 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 10 11:58:29 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84400 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd84000 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: load: jerasure load: lrc 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2dd85c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount shared_bdev_used = 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Git sha 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: DB SUMMARY
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: DB Session ID:  0RH7XH576014Q9A9FBIT
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                     Options.env: 0x560f2dc15ea0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                Options.info_log: 0x560f2ec668a0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.write_buffer_manager: 0x560f2dc7ab40
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.row_cache: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                              Options.wal_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.wal_compression: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_background_jobs: 4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Compression algorithms supported:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kZSTD supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kXpressCompression supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kBZip2Compression supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kLZ4Compression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kZlibCompression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: 	kSnappyCompression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc198d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc198d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc198d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc19a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc19a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc19a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 43bbcf8f-3aee-403e-8276-7bdc1e6e65cd
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309631931, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309634066, "job": 1, "event": "recovery_finished"}
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: freelist init
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: freelist _read_cfg
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs umount
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bdev(0x560f2ea1b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluefs mount shared_bdev_used = 27262976
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Git sha 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: DB SUMMARY
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: DB Session ID:  0RH7XH576014Q9A9FBIS
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                     Options.env: 0x560f2ee36a80
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                Options.info_log: 0x560f2ec66960
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.write_buffer_manager: 0x560f2dc7ab40
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.row_cache: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                              Options.wal_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.wal_compression: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_background_jobs: 4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Compression algorithms supported:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kZSTD supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kXpressCompression supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kBZip2Compression supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kLZ4Compression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kZlibCompression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: #011kSnappyCompression supported: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc198d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc198d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc198d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec66bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc198d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec670c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f2dc19a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec670c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc19a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
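The `table_factory options` record above is a single journal entry whose embedded newlines journald escapes as `#012` (octal 012 = LF). A minimal sketch for re-expanding those escapes when post-processing the log; the helper name is illustrative, not part of any tool:

```python
def unescape_journald(message: str) -> str:
    """Expand journald's #012 newline escapes back into real newlines."""
    return message.replace("#012", "\n")

# Abbreviated fragment of the table_factory dump above:
sample = "block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16"
print(unescape_journald(sample))
```

After expansion the dump reads as the indented multi-line block RocksDB originally wrote to its LOG file.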
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f2ec670c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f2dc19a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
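The recovery lines above enumerate the BlueStore column families by name and ID (`default`, `m-*`, `p-*`, `O-*`, `L`, `P`). A minimal sketch, assuming lines in the exact format shown, that collects that name-to-ID mapping:

```python
import re

# Matches the "Column family [name] (ID n)" fragments emitted by
# db/version_set.cc during manifest recovery.
CF_RE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

def column_families(lines):
    """Return {name: id} for every column-family recovery line."""
    cfs = {}
    for line in lines:
        m = CF_RE.search(line)
        if m:
            cfs[m.group(1)] = int(m.group(2))
    return cfs

# Abbreviated copies of two of the lines above:
sample = [
    "rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5",
    "rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5",
]
print(column_families(sample))
```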
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 43bbcf8f-3aee-403e-8276-7bdc1e6e65cd
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309690347, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309696097, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064309, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "43bbcf8f-3aee-403e-8276-7bdc1e6e65cd", "db_session_id": "0RH7XH576014Q9A9FBIS", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309699305, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064309, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "43bbcf8f-3aee-403e-8276-7bdc1e6e65cd", "db_session_id": "0RH7XH576014Q9A9FBIS", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309703096, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064309, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "43bbcf8f-3aee-403e-8276-7bdc1e6e65cd", "db_session_id": "0RH7XH576014Q9A9FBIS", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309704577, "job": 1, "event": "recovery_finished"}
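RocksDB's structured events carry a JSON payload after the `EVENT_LOG_v1` tag, so the WAL-recovery sequence above (`recovery_started` → three `table_file_creation` events → `recovery_finished`) can be machine-checked. A minimal sketch, using abbreviated copies of the lines above:

```python
import json
import re

# The JSON object follows the EVENT_LOG_v1 tag on each structured-event line.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def parse_events(lines):
    """Extract the JSON payload from every EVENT_LOG_v1 line."""
    return [json.loads(m.group(1)) for m in map(EVENT_RE.search, lines) if m]

sample = [
    'rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309690347, "job": 1, "event": "recovery_started", "wal_files": [31]}',
    'rocksdb: EVENT_LOG_v1 {"time_micros": 1768064309704577, "job": 1, "event": "recovery_finished"}',
]
events = parse_events(sample)
print([e["event"] for e in events])
```

The same pattern recovers the `table_properties` dictionaries from the `table_file_creation` events, which is often the quickest way to audit per-SST compression and entry counts.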
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560f2ee80000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: DB pointer 0x560f2ee20000
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
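The `_open_db` line above records the option string BlueStore handed to RocksDB; values such as `write_buffer_size=16777216` and `max_write_buffer_number=64` match the per-column-family dumps earlier. A minimal sketch for splitting such a string into key/value pairs (it deliberately does not handle option values that themselves contain commas):

```python
def parse_option_string(opts: str) -> dict:
    """Split a comma-separated key=value RocksDB option string into a dict."""
    return dict(item.split("=", 1) for item in opts.split(","))

# Abbreviated copy of the option string logged above:
opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,write_buffer_size=16777216")
parsed = parse_option_string(opts)
print(parsed["write_buffer_size"])
```

Values come back as strings; sizes like `2MB` in the full string would need a separate unit parser.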
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 460.80 MB usag
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: _get_class not permitted to load lua
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: _get_class not permitted to load sdk
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0 0 load_pgs
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0 0 load_pgs opened 0 pgs
Jan 10 11:58:29 np0005580781 ceph-osd[85764]: osd.0 0 log_to_monitors true
Jan 10 11:58:29 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0[85760]: 2026-01-10T16:58:29.746+0000 7f051afd28c0 -1 osd.0 0 log_to_monitors true
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 10 11:58:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 10 11:58:29 np0005580781 podman[86301]: 2026-01-10 16:58:29.863499805 +0000 UTC m=+0.034864849 container create df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 11:58:29 np0005580781 systemd[1]: Started libpod-conmon-df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb.scope.
Jan 10 11:58:29 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:29 np0005580781 podman[86301]: 2026-01-10 16:58:29.849098979 +0000 UTC m=+0.020464043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:29 np0005580781 podman[86301]: 2026-01-10 16:58:29.948430128 +0000 UTC m=+0.119795292 container init df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:29 np0005580781 podman[86301]: 2026-01-10 16:58:29.956520132 +0000 UTC m=+0.127885176 container start df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 11:58:29 np0005580781 podman[86301]: 2026-01-10 16:58:29.960149987 +0000 UTC m=+0.131515041 container attach df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:58:29 np0005580781 interesting_rhodes[86318]: 167 167
Jan 10 11:58:29 np0005580781 systemd[1]: libpod-df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb.scope: Deactivated successfully.
Jan 10 11:58:29 np0005580781 podman[86301]: 2026-01-10 16:58:29.966677795 +0000 UTC m=+0.138042849 container died df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:29 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:29 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b6899152b90c70e7977b647c3222fa75825ff72fbb26e44a029b85b5124de67a-merged.mount: Deactivated successfully.
Jan 10 11:58:30 np0005580781 podman[86301]: 2026-01-10 16:58:30.017565885 +0000 UTC m=+0.188930929 container remove df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:58:30 np0005580781 systemd[1]: libpod-conmon-df0cef2280bfdb8ef8e3fe18bce493202e2eb791e26e3267fbd92607766e3ebb.scope: Deactivated successfully.
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: Deploying daemon osd.1 on compute-0
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:30 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:30 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:30 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:30 np0005580781 podman[86347]: 2026-01-10 16:58:30.336171029 +0000 UTC m=+0.062989571 container create 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:30 np0005580781 systemd[1]: Started libpod-conmon-1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc.scope.
Jan 10 11:58:30 np0005580781 podman[86347]: 2026-01-10 16:58:30.313513944 +0000 UTC m=+0.040332486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:30 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:30 np0005580781 podman[86347]: 2026-01-10 16:58:30.44282936 +0000 UTC m=+0.169647902 container init 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:30 np0005580781 podman[86347]: 2026-01-10 16:58:30.458817762 +0000 UTC m=+0.185636284 container start 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 11:58:30 np0005580781 podman[86347]: 2026-01-10 16:58:30.463467826 +0000 UTC m=+0.190286448 container attach 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 11:58:30 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test[86363]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 10 11:58:30 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test[86363]:                            [--no-systemd] [--no-tmpfs]
Jan 10 11:58:30 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test[86363]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 10 11:58:30 np0005580781 systemd[1]: libpod-1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc.scope: Deactivated successfully.
Jan 10 11:58:30 np0005580781 podman[86347]: 2026-01-10 16:58:30.708378061 +0000 UTC m=+0.435196593 container died 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 11:58:30 np0005580781 systemd[1]: var-lib-containers-storage-overlay-37ef848a00c737ed432384bbf0859dda4c9816834951f5cc4915975b627d3b49-merged.mount: Deactivated successfully.
Jan 10 11:58:30 np0005580781 podman[86347]: 2026-01-10 16:58:30.766312014 +0000 UTC m=+0.493130516 container remove 1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:30 np0005580781 systemd[1]: libpod-conmon-1b9880b26aca710f2fdb475f2ea290d1885095cffdf030610542c939ee9528fc.scope: Deactivated successfully.
Jan 10 11:58:30 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 10 11:58:30 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 10 11:58:30 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:31 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:31 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:31 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 10 11:58:31 np0005580781 ceph-osd[85764]: osd.0 0 done with init, starting boot process
Jan 10 11:58:31 np0005580781 ceph-osd[85764]: osd.0 0 start_boot
Jan 10 11:58:31 np0005580781 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 10 11:58:31 np0005580781 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 10 11:58:31 np0005580781 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 10 11:58:31 np0005580781 ceph-osd[85764]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 10 11:58:31 np0005580781 ceph-osd[85764]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:31 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:31 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:31 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 11:58:31 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:31 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:31 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:31 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:31 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:31 np0005580781 systemd[1]: Starting Ceph osd.1 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:58:31 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:32 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 11:58:32 np0005580781 podman[86522]: 2026-01-10 16:58:32.251528428 +0000 UTC m=+0.030909983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:32 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:32 np0005580781 podman[86522]: 2026-01-10 16:58:32.746014123 +0000 UTC m=+0.525395568 container create eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:32 np0005580781 ceph-mon[75249]: from='osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 11:58:32 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:32 np0005580781 podman[86522]: 2026-01-10 16:58:32.849850362 +0000 UTC m=+0.629231837 container init eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:58:32 np0005580781 podman[86522]: 2026-01-10 16:58:32.858921684 +0000 UTC m=+0.638303129 container start eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 11:58:32 np0005580781 podman[86522]: 2026-01-10 16:58:32.877728617 +0000 UTC m=+0.657110062 container attach eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 11:58:32 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:33 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 11:58:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:33 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:33 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:33 np0005580781 lvm[86622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:58:33 np0005580781 lvm[86622]: VG ceph_vg0 finished
Jan 10 11:58:33 np0005580781 lvm[86623]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:33 np0005580781 lvm[86623]: VG ceph_vg1 finished
Jan 10 11:58:33 np0005580781 lvm[86625]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:58:33 np0005580781 lvm[86625]: VG ceph_vg2 finished
Jan 10 11:58:33 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 11:58:33 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 bash[86522]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 11:58:33 np0005580781 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 bash[86522]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:33 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 11:58:33 np0005580781 bash[86522]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 11:58:33 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 10 11:58:33 np0005580781 bash[86522]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 10 11:58:33 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:34 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 bash[86522]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 bash[86522]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 11:58:34 np0005580781 bash[86522]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 10 11:58:34 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 11:58:34 np0005580781 bash[86522]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 10 11:58:34 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate[86537]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 10 11:58:34 np0005580781 bash[86522]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 10 11:58:34 np0005580781 systemd[1]: libpod-eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304.scope: Deactivated successfully.
Jan 10 11:58:34 np0005580781 podman[86522]: 2026-01-10 16:58:34.108664365 +0000 UTC m=+1.888045810 container died eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 11:58:34 np0005580781 systemd[1]: libpod-eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304.scope: Consumed 1.860s CPU time.
Jan 10 11:58:34 np0005580781 systemd[1]: var-lib-containers-storage-overlay-38db49f157b25ff43141dccfb0d9c9d85c8ab9fc04ed9726c501d8bc7102b743-merged.mount: Deactivated successfully.
Jan 10 11:58:34 np0005580781 podman[86522]: 2026-01-10 16:58:34.224962895 +0000 UTC m=+2.004344330 container remove eb455802a229983b59b0ec7b48445ef5acf6c4f8546e7f0d29c89a4c6a980304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 11:58:34 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:34 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:34 np0005580781 podman[86790]: 2026-01-10 16:58:34.476429429 +0000 UTC m=+0.027745432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:34 np0005580781 podman[86790]: 2026-01-10 16:58:34.573330868 +0000 UTC m=+0.124646841 container create 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d223e993025546a65a9f275da11fc8fb7498cccda2489d98879e97dd1dc74a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:34 np0005580781 podman[86790]: 2026-01-10 16:58:34.683414659 +0000 UTC m=+0.234730662 container init 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 11:58:34 np0005580781 podman[86790]: 2026-01-10 16:58:34.69316237 +0000 UTC m=+0.244478343 container start 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:34 np0005580781 bash[86790]: 2086bc4111bf1ec90a2c9e86b7eed8efdf33f3d17156c7273e188563a328765d
Jan 10 11:58:34 np0005580781 systemd[1]: Started Ceph osd.1 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: pidfile_write: ignore empty --pid-file
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:34 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:34 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 10 11:58:34 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee400 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953ee000 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 10 11:58:34 np0005580781 ceph-mgr[75538]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: load: jerasure load: lrc 
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:34 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d5953efc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount shared_bdev_used = 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Git sha 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: DB SUMMARY
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: DB Session ID:  THG7H36IMJBHPS2MRPBN
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                     Options.env: 0x55d5960cbc00
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                Options.info_log: 0x55d5962ce900
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.write_buffer_manager: 0x55d596172b40
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.row_cache: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                              Options.wal_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.wal_compression: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_background_jobs: 4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Compression algorithms supported:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kZSTD supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kXpressCompression supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kBZip2Compression supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kLZ4Compression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kZlibCompression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: 	kSnappyCompression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d5952838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cecc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d5952838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cece0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d595283a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cece0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d595283a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cece0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d595283a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 14842030-9ecb-4d28-b0e7-76776ffb878c
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315161751, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315165205, "job": 1, "event": "recovery_finished"}
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: freelist init
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: freelist _read_cfg
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs umount
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) close
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bdev(0x55d596085800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluefs mount shared_bdev_used = 27262976
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Git sha 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: DB SUMMARY
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: DB Session ID:  THG7H36IMJBHPS2MRPBM
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                     Options.env: 0x55d59527fab0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                Options.info_log: 0x55d5962cea80
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.write_buffer_manager: 0x55d596173900
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.row_cache: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                              Options.wal_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.wal_compression: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_background_jobs: 4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Compression algorithms supported:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kZSTD supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kXpressCompression supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kBZip2Compression supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kLZ4Compression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kZlibCompression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: #011kSnappyCompression supported: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d5952838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d5952838d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d5952838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d5952838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d5952838d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d595283a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d595283a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d5962cff80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d595283a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 14842030-9ecb-4d28-b0e7-76776ffb878c
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315217323, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315237076, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "14842030-9ecb-4d28-b0e7-76776ffb878c", "db_session_id": "THG7H36IMJBHPS2MRPBM", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315242458, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "14842030-9ecb-4d28-b0e7-76776ffb878c", "db_session_id": "THG7H36IMJBHPS2MRPBM", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315267274, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "14842030-9ecb-4d28-b0e7-76776ffb878c", "db_session_id": "THG7H36IMJBHPS2MRPBM", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064315270778, "job": 1, "event": "recovery_finished"}
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 10 11:58:35 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:35 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d5964e8000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: DB pointer 0x55d596488000
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 460.80 MB usag
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: _get_class not permitted to load lua
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: _get_class not permitted to load sdk
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1 0 load_pgs
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1 0 load_pgs opened 0 pgs
Jan 10 11:58:35 np0005580781 ceph-osd[86809]: osd.1 0 log_to_monitors true
Jan 10 11:58:35 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1[86805]: 2026-01-10T16:58:35.383+0000 7f1fce8078c0 -1 osd.1 0 log_to_monitors true
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 10 11:58:35 np0005580781 podman[87315]: 2026-01-10 16:58:35.463790612 +0000 UTC m=+0.092818153 container create 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 11:58:35 np0005580781 podman[87315]: 2026-01-10 16:58:35.416347481 +0000 UTC m=+0.045375092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:35 np0005580781 systemd[1]: Started libpod-conmon-06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382.scope.
Jan 10 11:58:35 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:35 np0005580781 podman[87315]: 2026-01-10 16:58:35.60530672 +0000 UTC m=+0.234334271 container init 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:35 np0005580781 podman[87315]: 2026-01-10 16:58:35.613996421 +0000 UTC m=+0.243023962 container start 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:35 np0005580781 determined_gagarin[87364]: 167 167
Jan 10 11:58:35 np0005580781 systemd[1]: libpod-06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382.scope: Deactivated successfully.
Jan 10 11:58:35 np0005580781 podman[87315]: 2026-01-10 16:58:35.630306562 +0000 UTC m=+0.259334143 container attach 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:35 np0005580781 podman[87315]: 2026-01-10 16:58:35.632815724 +0000 UTC m=+0.261843285 container died 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 11:58:35 np0005580781 systemd[1]: var-lib-containers-storage-overlay-dbfe068d21816042f7fc0b6bf446f571fd24a871e572d1debb0fd9dfdb9aa207-merged.mount: Deactivated successfully.
Jan 10 11:58:35 np0005580781 podman[87315]: 2026-01-10 16:58:35.711800596 +0000 UTC m=+0.340828137 container remove 06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:35 np0005580781 systemd[1]: libpod-conmon-06751f32c88e4ae843dd95464f117eec884a10f2c234078dd4b57d0b290f6382.scope: Deactivated successfully.
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: Deploying daemon osd.2 on compute-0
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:35 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:35 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:35 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:35 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 10 11:58:36 np0005580781 podman[87394]: 2026-01-10 16:58:36.034598501 +0000 UTC m=+0.083779301 container create 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 11:58:36 np0005580781 podman[87394]: 2026-01-10 16:58:35.990836647 +0000 UTC m=+0.040017527 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:36 np0005580781 systemd[1]: Started libpod-conmon-19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab.scope.
Jan 10 11:58:36 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:36 np0005580781 podman[87394]: 2026-01-10 16:58:36.173076141 +0000 UTC m=+0.222256951 container init 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:36 np0005580781 podman[87394]: 2026-01-10 16:58:36.194200041 +0000 UTC m=+0.243380831 container start 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 11:58:36 np0005580781 podman[87394]: 2026-01-10 16:58:36.20178609 +0000 UTC m=+0.250966890 container attach 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 11:58:36 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/150683745; not ready for session (expect reconnect)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:36 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 10 11:58:36 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test[87409]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 10 11:58:36 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test[87409]:                            [--no-systemd] [--no-tmpfs]
Jan 10 11:58:36 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test[87409]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.911 iops: 4329.178 elapsed_sec: 0.693
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: log_channel(cluster) log [WRN] : OSD bench result of 4329.177571 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 0 waiting for initial osdmap
Jan 10 11:58:36 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0[85760]: 2026-01-10T16:58:36.742+0000 7f0516f54640 -1 osd.0 0 waiting for initial osdmap
Jan 10 11:58:36 np0005580781 systemd[1]: libpod-19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab.scope: Deactivated successfully.
Jan 10 11:58:36 np0005580781 podman[87394]: 2026-01-10 16:58:36.747803972 +0000 UTC m=+0.796984752 container died 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Jan 10 11:58:36 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-0[85760]: 2026-01-10T16:58:36.774+0000 7f0511d59640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 10 11:58:36 np0005580781 systemd[1]: var-lib-containers-storage-overlay-952bd10987bd926e79e48b649f1fc478281861fbf2d9c0c841864d800228bcd4-merged.mount: Deactivated successfully.
Jan 10 11:58:36 np0005580781 podman[87394]: 2026-01-10 16:58:36.796811088 +0000 UTC m=+0.845991868 container remove 19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 11:58:36 np0005580781 systemd[1]: libpod-conmon-19fe0021d6c9a3f88e72fd0b56fd4f8613104b58cb828f8c6d652fcee3976cab.scope: Deactivated successfully.
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: osd.1 0 done with init, starting boot process
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: osd.1 0 start_boot
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 10 11:58:36 np0005580781 ceph-osd[86809]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745] boot
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 10 11:58:36 np0005580781 ceph-osd[85764]: osd.0 10 state: booting -> active
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:36 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:36 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: OSD bench result of 4329.177571 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 11:58:36 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:36 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:36 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] creating mgr pool
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 10 11:58:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 10 11:58:37 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:37 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:37 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:37 np0005580781 systemd[1]: Reloading.
Jan 10 11:58:37 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:58:37 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 10 11:58:37 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 10 11:58:37 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 10 11:58:37 np0005580781 ceph-osd[85764]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 10 11:58:37 np0005580781 ceph-osd[85764]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 10 11:58:37 np0005580781 ceph-osd[85764]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 10 11:58:37 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:37 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 11:58:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:58:37
Jan 10 11:58:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 11:58:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 10 11:58:38 np0005580781 systemd[1]: Starting Ceph osd.2 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: from='osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: osd.0 [v2:192.168.122.100:6802/150683745,v1:192.168.122.100:6803/150683745] boot
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:38 np0005580781 podman[87571]: 2026-01-10 16:58:38.393645277 +0000 UTC m=+0.045476995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:38 np0005580781 podman[87571]: 2026-01-10 16:58:38.677392774 +0000 UTC m=+0.329224432 container create 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:38 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 21470642176
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:58:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:58:38 np0005580781 podman[87571]: 2026-01-10 16:58:38.958076832 +0000 UTC m=+0.609908510 container init 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 11:58:38 np0005580781 podman[87571]: 2026-01-10 16:58:38.96598409 +0000 UTC m=+0.617815758 container start 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:38 np0005580781 podman[87571]: 2026-01-10 16:58:38.982866998 +0000 UTC m=+0.634698656 container attach 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:39 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:39 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:39 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:39 np0005580781 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:39 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:39 np0005580781 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:39 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 11:58:40 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 10 11:58:40 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:40 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:40 np0005580781 lvm[87670]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:58:40 np0005580781 lvm[87670]: VG ceph_vg0 finished
Jan 10 11:58:40 np0005580781 lvm[87673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:40 np0005580781 lvm[87673]: VG ceph_vg1 finished
Jan 10 11:58:40 np0005580781 lvm[87674]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:58:40 np0005580781 lvm[87674]: VG ceph_vg2 finished
Jan 10 11:58:40 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 11:58:40 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:40 np0005580781 bash[87571]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 10 11:58:40 np0005580781 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:40 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:40 np0005580781 bash[87571]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 10 11:58:40 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:40 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:41 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 11:58:41 np0005580781 bash[87571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 11:58:41 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 10 11:58:41 np0005580781 bash[87571]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 10 11:58:41 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:41 np0005580781 bash[87571]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:41 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:41 np0005580781 bash[87571]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:41 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 11:58:41 np0005580781 bash[87571]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 10 11:58:41 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 11:58:41 np0005580781 bash[87571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 10 11:58:41 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate[87586]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 10 11:58:41 np0005580781 bash[87571]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 10 11:58:41 np0005580781 systemd[1]: libpod-2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155.scope: Deactivated successfully.
Jan 10 11:58:41 np0005580781 systemd[1]: libpod-2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155.scope: Consumed 3.355s CPU time.
Jan 10 11:58:41 np0005580781 podman[87571]: 2026-01-10 16:58:41.373278749 +0000 UTC m=+3.025110438 container died 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 11:58:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2c66e972109d42f348690ddaf51cfcc895c09fcae6c35a28353c39c85cc058f7-merged.mount: Deactivated successfully.
Jan 10 11:58:41 np0005580781 podman[87571]: 2026-01-10 16:58:41.53634862 +0000 UTC m=+3.188180268 container remove 2ffd569d340b7a8e948f9a2dfb8b5f2c14518f0bdbdc1a6b93ae7ff33f5ee155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2-activate, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 11:58:41 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:41 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:41 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:41 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 10 11:58:42 np0005580781 podman[87846]: 2026-01-10 16:58:42.091850687 +0000 UTC m=+0.075448750 container create d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 11:58:42 np0005580781 podman[87846]: 2026-01-10 16:58:42.043321025 +0000 UTC m=+0.026919148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa4e01d184a4b0e13138d4f16877c4cf4d88fc3fc829c3d4c48fc02df9a1e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:42 np0005580781 podman[87846]: 2026-01-10 16:58:42.301418051 +0000 UTC m=+0.285016114 container init d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 11:58:42 np0005580781 podman[87846]: 2026-01-10 16:58:42.309097283 +0000 UTC m=+0.292695336 container start d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:42 np0005580781 bash[87846]: d71926618b5142732453f9e9d6aaf6a6a6d47a415c6ee57ff501346a0a585c15
Jan 10 11:58:42 np0005580781 systemd[1]: Started Ceph osd.2 for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: pidfile_write: ignore empty --pid-file
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014400 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de014000 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: load: jerasure load: lrc 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621de015c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount shared_bdev_used = 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Git sha 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: DB SUMMARY
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: DB Session ID:  D6JZOUT0P79SMS3CMH42
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                     Options.env: 0x5621ddea5ea0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                Options.info_log: 0x5621deef68a0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.write_buffer_manager: 0x5621ddf0ab40
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.row_cache: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                              Options.wal_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.wal_compression: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_background_jobs: 4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Compression algorithms supported:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kZSTD supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kXpressCompression supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kBZip2Compression supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kLZ4Compression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kZlibCompression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: 	kSnappyCompression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef6c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b77ccd9f-4f29-4234-a608-29d54f994fb8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322781962, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322784170, "job": 1, "event": "recovery_finished"}
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: freelist init
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: freelist _read_cfg
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs umount
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) close
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bdev(0x5621decab800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluefs mount shared_bdev_used = 27262976
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: RocksDB version: 7.9.2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Git sha 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: DB SUMMARY
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: DB Session ID:  D6JZOUT0P79SMS3CMH43
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: CURRENT file:  CURRENT
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: IDENTITY file:  IDENTITY
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.error_if_exists: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.create_if_missing: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.paranoid_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                     Options.env: 0x5621ddea5d50
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                Options.info_log: 0x5621deef7aa0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_file_opening_threads: 16
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                              Options.statistics: (nil)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.use_fsync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.max_log_file_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.allow_fallocate: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.use_direct_reads: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.create_missing_column_families: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                              Options.db_log_dir: 
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                                 Options.wal_dir: db.wal
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.advise_random_on_open: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.write_buffer_manager: 0x5621ddf0b900
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                            Options.rate_limiter: (nil)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.unordered_write: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.row_cache: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                              Options.wal_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.allow_ingest_behind: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.two_write_queues: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.manual_wal_flush: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.wal_compression: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.atomic_flush: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.log_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.allow_data_in_errors: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.db_host_id: __hostname__
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_background_jobs: 4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_background_compactions: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_subcompactions: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.max_open_files: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.bytes_per_sync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.max_background_flushes: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Compression algorithms supported:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kZSTD supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kXpressCompression supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kBZip2Compression supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kLZ4Compression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kZlibCompression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: #011kSnappyCompression supported: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5621ddea9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ea0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ec0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea94b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ec0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea94b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:           Options.merge_operator: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.compaction_filter_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.sst_partitioner_factory: None
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621deef7ec0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5621ddea94b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.write_buffer_size: 16777216
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.max_write_buffer_number: 64
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.compression: LZ4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.num_levels: 7
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.level: 32767
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.compression_opts.strategy: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                  Options.compression_opts.enabled: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.arena_block_size: 1048576
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.disable_auto_compactions: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.inplace_update_support: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.bloom_locality: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                    Options.max_successive_merges: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.paranoid_file_checks: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.force_consistency_checks: 1
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.report_bg_io_stats: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                               Options.ttl: 2592000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                       Options.enable_blob_files: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                           Options.min_blob_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                          Options.blob_file_size: 268435456
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb:                Options.blob_file_starting_level: 0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b77ccd9f-4f29-4234-a608-29d54f994fb8
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322840883, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322845090, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b77ccd9f-4f29-4234-a608-29d54f994fb8", "db_session_id": "D6JZOUT0P79SMS3CMH43", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322847893, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b77ccd9f-4f29-4234-a608-29d54f994fb8", "db_session_id": "D6JZOUT0P79SMS3CMH43", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322850789, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b77ccd9f-4f29-4234-a608-29d54f994fb8", "db_session_id": "D6JZOUT0P79SMS3CMH43", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064322852236, "job": 1, "event": "recovery_finished"}
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 10 11:58:42 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:42 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5621df0ffc00
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: DB pointer 0x5621df0b0000
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 460.80 MB usag
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: _get_class not permitted to load lua
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: _get_class not permitted to load sdk
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2 0 load_pgs
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2 0 load_pgs opened 0 pgs
Jan 10 11:58:42 np0005580781 ceph-osd[87867]: osd.2 0 log_to_monitors true
Jan 10 11:58:42 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2[87861]: 2026-01-10T16:58:42.952+0000 7f9711cfc8c0 -1 osd.2 0 log_to_monitors true
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 10 11:58:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 10 11:58:43 np0005580781 podman[88377]: 2026-01-10 16:58:43.051453538 +0000 UTC m=+0.060045746 container create c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:43 np0005580781 podman[88377]: 2026-01-10 16:58:43.025248841 +0000 UTC m=+0.033841049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:43 np0005580781 systemd[1]: Started libpod-conmon-c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff.scope.
Jan 10 11:58:43 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 10 11:58:43 np0005580781 podman[88377]: 2026-01-10 16:58:43.210592005 +0000 UTC m=+0.219184203 container init c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 11:58:43 np0005580781 podman[88377]: 2026-01-10 16:58:43.220906083 +0000 UTC m=+0.229498281 container start c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 11:58:43 np0005580781 zealous_swartz[88393]: 167 167
Jan 10 11:58:43 np0005580781 systemd[1]: libpod-c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff.scope: Deactivated successfully.
Jan 10 11:58:43 np0005580781 podman[88377]: 2026-01-10 16:58:43.241405575 +0000 UTC m=+0.249997813 container attach c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 11:58:43 np0005580781 podman[88377]: 2026-01-10 16:58:43.242256339 +0000 UTC m=+0.250848577 container died c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3b2905c7d292de9b1c5b403bde5a4d5c8d4a427e52b431aa7ee3a18b1c4ab687-merged.mount: Deactivated successfully.
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:43 np0005580781 podman[88377]: 2026-01-10 16:58:43.316171355 +0000 UTC m=+0.324763593 container remove c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 11:58:43 np0005580781 systemd[1]: libpod-conmon-c2c95228d259496a3a2d0b6085239e309b4c395cc2f104041f0ff060488fa8ff.scope: Deactivated successfully.
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:43 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:43 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 10 11:58:43 np0005580781 podman[88418]: 2026-01-10 16:58:43.584348981 +0000 UTC m=+0.065359869 container create 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 11:58:43 np0005580781 podman[88418]: 2026-01-10 16:58:43.546485607 +0000 UTC m=+0.027496575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:43 np0005580781 systemd[1]: Started libpod-conmon-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope.
Jan 10 11:58:43 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:43 np0005580781 podman[88418]: 2026-01-10 16:58:43.866241464 +0000 UTC m=+0.347252342 container init 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 11:58:43 np0005580781 podman[88418]: 2026-01-10 16:58:43.879415914 +0000 UTC m=+0.360426762 container start 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:43 np0005580781 podman[88418]: 2026-01-10 16:58:43.883146232 +0000 UTC m=+0.364157120 container attach 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:43 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2762500155; not ready for session (expect reconnect)
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:43 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 10 11:58:43 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 10 11:58:43 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 10 11:58:43 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 11.559 iops: 2959.228 elapsed_sec: 1.014
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: log_channel(cluster) log [WRN] : OSD bench result of 2959.227629 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 0 waiting for initial osdmap
Jan 10 11:58:44 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1[86805]: 2026-01-10T16:58:44.318+0000 7f1fcaf9b640 -1 osd.1 0 waiting for initial osdmap
Jan 10 11:58:44 np0005580781 python3[88466]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 10 11:58:44 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-1[86805]: 2026-01-10T16:58:44.351+0000 7f1fc558e640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 11:58:44 np0005580781 podman[88480]: 2026-01-10 16:58:44.415856391 +0000 UTC m=+0.063801044 container create 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:44 np0005580781 systemd[1]: Started libpod-conmon-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope.
Jan 10 11:58:44 np0005580781 podman[88480]: 2026-01-10 16:58:44.385804583 +0000 UTC m=+0.033749256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:44 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 11:58:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 10 11:58:44 np0005580781 ceph-osd[87867]: osd.2 0 done with init, starting boot process
Jan 10 11:58:44 np0005580781 ceph-osd[87867]: osd.2 0 start_boot
Jan 10 11:58:44 np0005580781 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 10 11:58:44 np0005580781 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 10 11:58:44 np0005580781 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 10 11:58:44 np0005580781 ceph-osd[87867]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 10 11:58:44 np0005580781 ceph-osd[87867]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155] boot
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 10 11:58:44 np0005580781 podman[88480]: 2026-01-10 16:58:44.513279535 +0000 UTC m=+0.161224188 container init 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:44 np0005580781 podman[88480]: 2026-01-10 16:58:44.522739488 +0000 UTC m=+0.170684141 container start 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:44 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: OSD bench result of 2959.227629 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 11:58:44 np0005580781 podman[88480]: 2026-01-10 16:58:44.546204036 +0000 UTC m=+0.194148719 container attach 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:44 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 14 state: booting -> active
Jan 10 11:58:44 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:44 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:44 np0005580781 lvm[88581]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:44 np0005580781 lvm[88581]: VG ceph_vg1 finished
Jan 10 11:58:44 np0005580781 lvm[88580]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:58:44 np0005580781 lvm[88580]: VG ceph_vg0 finished
Jan 10 11:58:44 np0005580781 lvm[88583]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:58:44 np0005580781 lvm[88583]: VG ceph_vg2 finished
Jan 10 11:58:45 np0005580781 objective_wilson[88436]: {}
Jan 10 11:58:45 np0005580781 systemd[1]: libpod-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope: Deactivated successfully.
Jan 10 11:58:45 np0005580781 podman[88418]: 2026-01-10 16:58:45.065291291 +0000 UTC m=+1.546302169 container died 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:45 np0005580781 systemd[1]: libpod-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope: Consumed 1.755s CPU time.
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904687390' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 11:58:45 np0005580781 hopeful_ride[88511]: 
Jan 10 11:58:45 np0005580781 hopeful_ride[88511]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":86,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1768064324,"num_in_osds":3,"osd_in_since":1768064301,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":27611136,"bytes_avail":21443031040,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-10T16:57:15:771836+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-10T16:58:41.970835+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 10 11:58:45 np0005580781 systemd[1]: libpod-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope: Deactivated successfully.
Jan 10 11:58:45 np0005580781 conmon[88511]: conmon 7e0e74b231e5d34edc44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope/container/memory.events
Jan 10 11:58:45 np0005580781 podman[88480]: 2026-01-10 16:58:45.11511295 +0000 UTC m=+0.763057613 container died 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:58:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0cde86d6acb9b2e6f1a661c8ca32fdec413f2e4425ad066d50fa16aa28c40775-merged.mount: Deactivated successfully.
Jan 10 11:58:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b54a87512fc726cf6b9881bcfb93fbab535826607717cd1a93ef9bbca2832cd3-merged.mount: Deactivated successfully.
Jan 10 11:58:45 np0005580781 podman[88418]: 2026-01-10 16:58:45.370098417 +0000 UTC m=+1.851109265 container remove 4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 11:58:45 np0005580781 systemd[1]: libpod-conmon-4eec8c557589389180d975d1167ed696676ccb0974c3883d247a3ecbd9a1edaf.scope: Deactivated successfully.
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:45 np0005580781 podman[88480]: 2026-01-10 16:58:45.457022127 +0000 UTC m=+1.104966820 container remove 7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:45 np0005580781 systemd[1]: libpod-conmon-7e0e74b231e5d34edc44606d8e871521035a998d83a0faf865f23439f519f18a.scope: Deactivated successfully.
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 10 11:58:45 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:45 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:45 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: from='osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: osd.1 [v2:192.168.122.100:6806/2762500155,v1:192.168.122.100:6807/2762500155] boot
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:45 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:45 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:58:45 np0005580781 python3[88708]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:45 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 10 11:58:46 np0005580781 podman[88709]: 2026-01-10 16:58:46.106682384 +0000 UTC m=+0.119420851 container create c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:46 np0005580781 systemd[1]: Started libpod-conmon-c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf.scope.
Jan 10 11:58:46 np0005580781 podman[88709]: 2026-01-10 16:58:46.058197004 +0000 UTC m=+0.070935531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:46 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afe3dfa6e7d2135243521975c05c701590b688e4526b493731b2251ea9a328/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afe3dfa6e7d2135243521975c05c701590b688e4526b493731b2251ea9a328/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:46 np0005580781 podman[88709]: 2026-01-10 16:58:46.224108637 +0000 UTC m=+0.236847114 container init c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:46 np0005580781 podman[88709]: 2026-01-10 16:58:46.237929006 +0000 UTC m=+0.250667493 container start c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 11:58:46 np0005580781 podman[88709]: 2026-01-10 16:58:46.264877194 +0000 UTC m=+0.277615661 container attach c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 11:58:46 np0005580781 podman[88768]: 2026-01-10 16:58:46.343906277 +0000 UTC m=+0.083839183 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:46 np0005580781 podman[88768]: 2026-01-10 16:58:46.441209338 +0000 UTC m=+0.181142214 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:46 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 11:58:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:46 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 10 11:58:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 10 11:58:46 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 10 11:58:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:46 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:47 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:47 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 10 11:58:47 np0005580781 charming_bhabha[88752]: pool 'vms' created
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:47 np0005580781 systemd[1]: libpod-c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf.scope: Deactivated successfully.
Jan 10 11:58:47 np0005580781 podman[88709]: 2026-01-10 16:58:47.677816939 +0000 UTC m=+1.690555476 container died c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:47 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay-78afe3dfa6e7d2135243521975c05c701590b688e4526b493731b2251ea9a328-merged.mount: Deactivated successfully.
Jan 10 11:58:47 np0005580781 podman[88709]: 2026-01-10 16:58:47.845108832 +0000 UTC m=+1.857847409 container remove c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf (image=quay.io/ceph/ceph:v20, name=charming_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 11:58:47 np0005580781 systemd[1]: libpod-conmon-c1f53533bdb10eab5e17fa128a38158a6aa0cfa80b7111a59b0776fc73cb10cf.scope: Deactivated successfully.
Jan 10 11:58:47 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v43: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 10 11:58:48 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] creating main.db for devicehealth
Jan 10 11:58:48 np0005580781 python3[89029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:48 np0005580781 podman[89042]: 2026-01-10 16:58:48.264159697 +0000 UTC m=+0.095850949 container create a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:48 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 11:58:48 np0005580781 podman[89042]: 2026-01-10 16:58:48.208428667 +0000 UTC m=+0.040119919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:48 np0005580781 ceph-mgr[75538]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 10 11:58:48 np0005580781 systemd[1]: Started libpod-conmon-a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a.scope.
Jan 10 11:58:48 np0005580781 podman[89059]: 2026-01-10 16:58:48.337427644 +0000 UTC m=+0.089629220 container create c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:58:48 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:48 np0005580781 podman[89059]: 2026-01-10 16:58:48.275840525 +0000 UTC m=+0.028042121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 10 11:58:48 np0005580781 podman[89042]: 2026-01-10 16:58:48.385338408 +0000 UTC m=+0.217029710 container init a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:48 np0005580781 podman[89042]: 2026-01-10 16:58:48.39302643 +0000 UTC m=+0.224717702 container start a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 11:58:48 np0005580781 jovial_jackson[89082]: 167 167
Jan 10 11:58:48 np0005580781 systemd[1]: Started libpod-conmon-c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d.scope.
Jan 10 11:58:48 np0005580781 systemd[1]: libpod-a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a.scope: Deactivated successfully.
Jan 10 11:58:48 np0005580781 podman[89042]: 2026-01-10 16:58:48.413172182 +0000 UTC m=+0.244863524 container attach a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:48 np0005580781 podman[89042]: 2026-01-10 16:58:48.413585494 +0000 UTC m=+0.245276746 container died a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:48 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a71be2d527c3ec85baf8661916f2a932b2a9742091f6118d2abd7267b41b04d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a71be2d527c3ec85baf8661916f2a932b2a9742091f6118d2abd7267b41b04d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:48 np0005580781 systemd[1]: var-lib-containers-storage-overlay-bc322d6120f1e5e94c933b7a3fba9ccb529281495a09e47565c7006214a252c3-merged.mount: Deactivated successfully.
Jan 10 11:58:48 np0005580781 podman[89042]: 2026-01-10 16:58:48.542288532 +0000 UTC m=+0.373979794 container remove a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 10 11:58:48 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:48 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:48 np0005580781 systemd[1]: libpod-conmon-a68429e359b0b272559bd933bec517b9dc670f97169238bf0fd19281b08d366a.scope: Deactivated successfully.
Jan 10 11:58:48 np0005580781 podman[89059]: 2026-01-10 16:58:48.575659446 +0000 UTC m=+0.327861052 container init c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 11:58:48 np0005580781 podman[89059]: 2026-01-10 16:58:48.590071732 +0000 UTC m=+0.342273318 container start c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:48 np0005580781 podman[89059]: 2026-01-10 16:58:48.59381453 +0000 UTC m=+0.346016116 container attach c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3048395201' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 10 11:58:48 np0005580781 ceph-mon[75249]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 10 11:58:48 np0005580781 podman[89115]: 2026-01-10 16:58:48.75858238 +0000 UTC m=+0.032775948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:48 np0005580781 podman[89115]: 2026-01-10 16:58:48.941763711 +0000 UTC m=+0.215957279 container create aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:48 np0005580781 systemd[1]: Started libpod-conmon-aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69.scope.
Jan 10 11:58:49 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:49 np0005580781 podman[89115]: 2026-01-10 16:58:49.034003756 +0000 UTC m=+0.308197314 container init aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 11:58:49 np0005580781 podman[89115]: 2026-01-10 16:58:49.040978158 +0000 UTC m=+0.315171696 container start aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 10 11:58:49 np0005580781 podman[89115]: 2026-01-10 16:58:49.050726799 +0000 UTC m=+0.324920357 container attach aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mgr[75538]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4214371536; not ready for session (expect reconnect)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mgr[75538]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.611 iops: 5020.469 elapsed_sec: 0.598
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: log_channel(cluster) log [WRN] : OSD bench result of 5020.468677 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 0 waiting for initial osdmap
Jan 10 11:58:49 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2[87861]: 2026-01-10T16:58:49.547+0000 7f970e490640 -1 osd.2 0 waiting for initial osdmap
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 17 check_osdmap_features require_osd_release unknown -> tentacle
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 17 set_numa_affinity not setting numa affinity
Jan 10 11:58:49 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-osd-2[87861]: 2026-01-10T16:58:49.579+0000 7f9708a83640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 10 11:58:49 np0005580781 sad_jemison[89093]: pool 'volumes' created
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.mkxlpr(active, since 72s)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536] boot
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 10 11:58:49 np0005580781 systemd[1]: libpod-c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d.scope: Deactivated successfully.
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 18 state: booting -> active
Jan 10 11:58:49 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:58:49 np0005580781 podman[89565]: 2026-01-10 16:58:49.815401479 +0000 UTC m=+0.030598715 container died c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]: [
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:    {
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "available": false,
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "being_replaced": false,
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "ceph_device_lvm": false,
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "lsm_data": {},
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "lvs": [],
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "path": "/dev/sr0",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "rejected_reasons": [
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "Has a FileSystem",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "Insufficient space (<5GB)"
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        ],
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        "sys_api": {
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "actuators": null,
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "device_nodes": [
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:                "sr0"
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            ],
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "devname": "sr0",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "human_readable_size": "482.00 KB",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "id_bus": "ata",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "model": "QEMU DVD-ROM",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "nr_requests": "2",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "parent": "/dev/sr0",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "partitions": {},
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "path": "/dev/sr0",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "removable": "1",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "rev": "2.5+",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "ro": "0",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "rotational": "1",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "sas_address": "",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "sas_device_handle": "",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "scheduler_mode": "mq-deadline",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "sectors": 0,
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "sectorsize": "2048",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "size": 493568.0,
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "support_discard": "2048",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "type": "disk",
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:            "vendor": "QEMU"
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:        }
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]:    }
Jan 10 11:58:49 np0005580781 blissful_sinoussi[89150]: ]
Jan 10 11:58:49 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0a71be2d527c3ec85baf8661916f2a932b2a9742091f6118d2abd7267b41b04d-merged.mount: Deactivated successfully.
Jan 10 11:58:49 np0005580781 systemd[1]: libpod-aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69.scope: Deactivated successfully.
Jan 10 11:58:49 np0005580781 podman[89565]: 2026-01-10 16:58:49.864676462 +0000 UTC m=+0.079873698 container remove c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d (image=quay.io/ceph/ceph:v20, name=sad_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 11:58:49 np0005580781 podman[89115]: 2026-01-10 16:58:49.868638987 +0000 UTC m=+1.142832545 container died aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 11:58:49 np0005580781 systemd[1]: libpod-conmon-c82627a7b8af607a35c344ef6ef1f4b3d34e2f7db07c2878009eb6783f992f3d.scope: Deactivated successfully.
Jan 10 11:58:49 np0005580781 systemd[1]: var-lib-containers-storage-overlay-77ae59c8e4cd34ad6fb938fc9f9dca5d83295225c825805d657b2b187bcd47d0-merged.mount: Deactivated successfully.
Jan 10 11:58:49 np0005580781 podman[89115]: 2026-01-10 16:58:49.91166544 +0000 UTC m=+1.185858988 container remove aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 11:58:49 np0005580781 systemd[1]: libpod-conmon-aa8f8aa280dd13ffa02f12c570d095c07914d642d5dc7faf32366d808735ff69.scope: Deactivated successfully.
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:49 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v45: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Jan 10 11:58:49 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 10 11:58:49 np0005580781 ceph-mgr[75538]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 10 11:58:49 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:50 np0005580781 python3[89889]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:50 np0005580781 podman[89917]: 2026-01-10 16:58:50.314044234 +0000 UTC m=+0.077426148 container create 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:50 np0005580781 systemd[1]: Started libpod-conmon-8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc.scope.
Jan 10 11:58:50 np0005580781 podman[89917]: 2026-01-10 16:58:50.286204429 +0000 UTC m=+0.049586383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:50 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:58:50 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:50 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a692eeb6b3eb84981408646517b84bf0fedc8cd66fb92e3938493a51e85eb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:50 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a692eeb6b3eb84981408646517b84bf0fedc8cd66fb92e3938493a51e85eb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:50 np0005580781 podman[89917]: 2026-01-10 16:58:50.419558832 +0000 UTC m=+0.182940806 container init 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 11:58:50 np0005580781 podman[89917]: 2026-01-10 16:58:50.426850073 +0000 UTC m=+0.190231947 container start 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:50 np0005580781 podman[89917]: 2026-01-10 16:58:50.430095626 +0000 UTC m=+0.193477590 container attach 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:50 np0005580781 podman[89946]: 2026-01-10 16:58:50.469136524 +0000 UTC m=+0.075781570 container create 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:50 np0005580781 systemd[1]: Started libpod-conmon-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope.
Jan 10 11:58:50 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:50 np0005580781 podman[89946]: 2026-01-10 16:58:50.434568816 +0000 UTC m=+0.041213912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:50 np0005580781 podman[89946]: 2026-01-10 16:58:50.53441678 +0000 UTC m=+0.141061836 container init 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 10 11:58:50 np0005580781 podman[89946]: 2026-01-10 16:58:50.545187121 +0000 UTC m=+0.151832147 container start 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:50 np0005580781 sad_wozniak[89964]: 167 167
Jan 10 11:58:50 np0005580781 systemd[1]: libpod-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope: Deactivated successfully.
Jan 10 11:58:50 np0005580781 podman[89946]: 2026-01-10 16:58:50.549753683 +0000 UTC m=+0.156398709 container attach 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 11:58:50 np0005580781 conmon[89964]: conmon 3f2057eae82e69604512 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope/container/memory.events
Jan 10 11:58:50 np0005580781 podman[89946]: 2026-01-10 16:58:50.550590577 +0000 UTC m=+0.157235603 container died 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 11:58:50 np0005580781 systemd[1]: var-lib-containers-storage-overlay-54968204b5744bff9e10cc19f78007e9ef5bd5d51398262992f6a6347f12dc79-merged.mount: Deactivated successfully.
Jan 10 11:58:50 np0005580781 podman[89946]: 2026-01-10 16:58:50.602855306 +0000 UTC m=+0.209500312 container remove 3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:50 np0005580781 systemd[1]: libpod-conmon-3f2057eae82e69604512f6b2d31250f4010d73700e61a8569e5a06a27c2d3957.scope: Deactivated successfully.
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: OSD bench result of 5020.468677 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/709839503' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: osd.2 [v2:192.168.122.100:6810/4214371536,v1:192.168.122.100:6811/4214371536] boot
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: Adjusting osd_memory_target on compute-0 to 43688k
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:58:50 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 pi=[17,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:58:50 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:58:50 np0005580781 podman[90007]: 2026-01-10 16:58:50.775643317 +0000 UTC m=+0.040199902 container create 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 11:58:50 np0005580781 systemd[1]: Started libpod-conmon-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope.
Jan 10 11:58:50 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:50 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:50 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:50 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:50 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:50 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:50 np0005580781 podman[90007]: 2026-01-10 16:58:50.758195503 +0000 UTC m=+0.022752118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:50 np0005580781 podman[90007]: 2026-01-10 16:58:50.886095998 +0000 UTC m=+0.150652633 container init 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 11:58:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:50 np0005580781 podman[90007]: 2026-01-10 16:58:50.892809592 +0000 UTC m=+0.157366177 container start 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:50 np0005580781 podman[90007]: 2026-01-10 16:58:50.896029805 +0000 UTC m=+0.160586390 container attach 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:51 np0005580781 hardcore_mccarthy[90025]: --> passed data devices: 0 physical, 3 LVM
Jan 10 11:58:51 np0005580781 hardcore_mccarthy[90025]: --> All data devices are unavailable
Jan 10 11:58:51 np0005580781 systemd[1]: libpod-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope: Deactivated successfully.
Jan 10 11:58:51 np0005580781 conmon[90025]: conmon 26aaeb6c554cddd08274 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope/container/memory.events
Jan 10 11:58:51 np0005580781 podman[90007]: 2026-01-10 16:58:51.453428227 +0000 UTC m=+0.717984822 container died 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:51 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a3644d3cac0e5e37fa4f21bfbcf9809c1c232747985294003468894bdfe34a46-merged.mount: Deactivated successfully.
Jan 10 11:58:51 np0005580781 podman[90007]: 2026-01-10 16:58:51.537532056 +0000 UTC m=+0.802088641 container remove 26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:51 np0005580781 systemd[1]: libpod-conmon-26aaeb6c554cddd0827474cbce707be348d176bf07c5e914dd8ba777f811a51c.scope: Deactivated successfully.
Jan 10 11:58:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 10 11:58:51 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 10 11:58:51 np0005580781 beautiful_mcnulty[89944]: pool 'backups' created
Jan 10 11:58:51 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 10 11:58:51 np0005580781 systemd[1]: libpod-8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc.scope: Deactivated successfully.
Jan 10 11:58:51 np0005580781 podman[89917]: 2026-01-10 16:58:51.796744274 +0000 UTC m=+1.560126148 container died 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:51 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c9a692eeb6b3eb84981408646517b84bf0fedc8cd66fb92e3938493a51e85eb2-merged.mount: Deactivated successfully.
Jan 10 11:58:51 np0005580781 podman[89917]: 2026-01-10 16:58:51.844893675 +0000 UTC m=+1.608275549 container remove 8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc (image=quay.io/ceph/ceph:v20, name=beautiful_mcnulty, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:58:51 np0005580781 systemd[1]: libpod-conmon-8bdb8c3f6f63cf59425e2305f62b2d93f007819fc5b09211f4bcbeedde247bdc.scope: Deactivated successfully.
Jan 10 11:58:51 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v48: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:58:52 np0005580781 podman[90157]: 2026-01-10 16:58:52.076005192 +0000 UTC m=+0.057236925 container create 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:52 np0005580781 systemd[1]: Started libpod-conmon-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope.
Jan 10 11:58:52 np0005580781 podman[90157]: 2026-01-10 16:58:52.044281135 +0000 UTC m=+0.025512968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:52 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:52 np0005580781 podman[90157]: 2026-01-10 16:58:52.185252587 +0000 UTC m=+0.166484350 container init 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:58:52 np0005580781 podman[90157]: 2026-01-10 16:58:52.193413023 +0000 UTC m=+0.174644756 container start 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 11:58:52 np0005580781 podman[90157]: 2026-01-10 16:58:52.196956496 +0000 UTC m=+0.178188259 container attach 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 11:58:52 np0005580781 modest_mahavira[90179]: 167 167
Jan 10 11:58:52 np0005580781 systemd[1]: libpod-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope: Deactivated successfully.
Jan 10 11:58:52 np0005580781 conmon[90179]: conmon 431392fee8a297f6a654 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope/container/memory.events
Jan 10 11:58:52 np0005580781 podman[90157]: 2026-01-10 16:58:52.201289741 +0000 UTC m=+0.182521474 container died 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 11:58:52 np0005580781 python3[90173]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:52 np0005580781 systemd[1]: var-lib-containers-storage-overlay-73f35c8eb60c0cb4e529a67725ef81b0207d2b85bd0b4e552de1ba37cc544a1c-merged.mount: Deactivated successfully.
Jan 10 11:58:52 np0005580781 podman[90157]: 2026-01-10 16:58:52.241481872 +0000 UTC m=+0.222713605 container remove 431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mahavira, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 11:58:52 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:58:52 np0005580781 systemd[1]: libpod-conmon-431392fee8a297f6a65413d61ff67c93a1660628f8ef4c82f315ea0cb1350935.scope: Deactivated successfully.
Jan 10 11:58:52 np0005580781 podman[90186]: 2026-01-10 16:58:52.283273189 +0000 UTC m=+0.052611251 container create c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 11:58:52 np0005580781 systemd[1]: Started libpod-conmon-c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe.scope.
Jan 10 11:58:52 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:52 np0005580781 podman[90186]: 2026-01-10 16:58:52.266070762 +0000 UTC m=+0.035408854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:52 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b13baeb0b7ee933361c5bd52c3e76b6afca1889412c05fe53f5c4950e9ebe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:52 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64b13baeb0b7ee933361c5bd52c3e76b6afca1889412c05fe53f5c4950e9ebe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:52 np0005580781 podman[90186]: 2026-01-10 16:58:52.378175801 +0000 UTC m=+0.147513883 container init c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:52 np0005580781 podman[90186]: 2026-01-10 16:58:52.386934984 +0000 UTC m=+0.156273096 container start c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:52 np0005580781 podman[90186]: 2026-01-10 16:58:52.391413363 +0000 UTC m=+0.160751435 container attach c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 11:58:52 np0005580781 podman[90220]: 2026-01-10 16:58:52.426176907 +0000 UTC m=+0.047270186 container create 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:52 np0005580781 systemd[1]: Started libpod-conmon-9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4.scope.
Jan 10 11:58:52 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:52 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:52 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:52 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:52 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:52 np0005580781 podman[90220]: 2026-01-10 16:58:52.406648313 +0000 UTC m=+0.027741602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:52 np0005580781 podman[90220]: 2026-01-10 16:58:52.506881479 +0000 UTC m=+0.127974768 container init 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 10 11:58:52 np0005580781 podman[90220]: 2026-01-10 16:58:52.521105829 +0000 UTC m=+0.142199108 container start 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:58:52 np0005580781 podman[90220]: 2026-01-10 16:58:52.525023973 +0000 UTC m=+0.146117302 container attach 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:52 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 10 11:58:52 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/4292381526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:52 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 10 11:58:52 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 10 11:58:52 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:58:52 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 11:58:52 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]: {
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:    "0": [
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:        {
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "devices": [
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "/dev/loop3"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            ],
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_name": "ceph_lv0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_size": "21470642176",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "name": "ceph_lv0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "tags": {
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.crush_device_class": "",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.encrypted": "0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osd_id": "0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.type": "block",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.vdo": "0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.with_tpm": "0"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            },
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "type": "block",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "vg_name": "ceph_vg0"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:        }
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:    ],
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:    "1": [
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:        {
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "devices": [
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "/dev/loop4"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            ],
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_name": "ceph_lv1",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_size": "21470642176",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "name": "ceph_lv1",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "tags": {
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.crush_device_class": "",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.encrypted": "0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osd_id": "1",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.type": "block",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.vdo": "0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.with_tpm": "0"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            },
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "type": "block",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "vg_name": "ceph_vg1"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:        }
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:    ],
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:    "2": [
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:        {
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "devices": [
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "/dev/loop5"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            ],
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_name": "ceph_lv2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_size": "21470642176",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "name": "ceph_lv2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "tags": {
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.crush_device_class": "",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.encrypted": "0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osd_id": "2",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.type": "block",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.vdo": "0",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:                "ceph.with_tpm": "0"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            },
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "type": "block",
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:            "vg_name": "ceph_vg2"
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:        }
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]:    ]
Jan 10 11:58:52 np0005580781 wonderful_shannon[90237]: }
Jan 10 11:58:52 np0005580781 systemd[1]: libpod-9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4.scope: Deactivated successfully.
Jan 10 11:58:52 np0005580781 podman[90220]: 2026-01-10 16:58:52.886152195 +0000 UTC m=+0.507245494 container died 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:52 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c6815ae859d545b08a4450b8d43ff1e9499bb15f93b1f05670cdc62d61a4dcbe-merged.mount: Deactivated successfully.
Jan 10 11:58:52 np0005580781 podman[90220]: 2026-01-10 16:58:52.936671754 +0000 UTC m=+0.557765043 container remove 9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:52 np0005580781 systemd[1]: libpod-conmon-9c66f10c8c865c36ebfd06387b00d915582c43a996b02a73105ae878bd3a6ff4.scope: Deactivated successfully.
Jan 10 11:58:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:53 np0005580781 podman[90342]: 2026-01-10 16:58:53.467967482 +0000 UTC m=+0.043914450 container create 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:53 np0005580781 systemd[1]: Started libpod-conmon-648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37.scope.
Jan 10 11:58:53 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:53 np0005580781 podman[90342]: 2026-01-10 16:58:53.546436848 +0000 UTC m=+0.122383856 container init 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 10 11:58:53 np0005580781 podman[90342]: 2026-01-10 16:58:53.45232304 +0000 UTC m=+0.028270018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:53 np0005580781 podman[90342]: 2026-01-10 16:58:53.555643254 +0000 UTC m=+0.131590242 container start 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 11:58:53 np0005580781 podman[90342]: 2026-01-10 16:58:53.559479645 +0000 UTC m=+0.135426623 container attach 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 11:58:53 np0005580781 friendly_hugle[90359]: 167 167
Jan 10 11:58:53 np0005580781 systemd[1]: libpod-648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37.scope: Deactivated successfully.
Jan 10 11:58:53 np0005580781 podman[90342]: 2026-01-10 16:58:53.561442342 +0000 UTC m=+0.137389320 container died 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:53 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b879e2ace163a989734e21f582848ce298ee764a608067259bd0f0c4ee8a8f60-merged.mount: Deactivated successfully.
Jan 10 11:58:53 np0005580781 podman[90342]: 2026-01-10 16:58:53.608076549 +0000 UTC m=+0.184023517 container remove 648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 10 11:58:53 np0005580781 systemd[1]: libpod-conmon-648c22547ea7c05c2fe77848fa1722264939d7f099a7488c890d263fcabdca37.scope: Deactivated successfully.
Jan 10 11:58:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 10 11:58:53 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:53 np0005580781 podman[90383]: 2026-01-10 16:58:53.794871095 +0000 UTC m=+0.045403432 container create 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 10 11:58:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 10 11:58:53 np0005580781 pensive_chaplygin[90212]: pool 'images' created
Jan 10 11:58:53 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 10 11:58:53 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:58:53 np0005580781 systemd[1]: Started libpod-conmon-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope.
Jan 10 11:58:53 np0005580781 systemd[1]: libpod-c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe.scope: Deactivated successfully.
Jan 10 11:58:53 np0005580781 podman[90186]: 2026-01-10 16:58:53.840862454 +0000 UTC m=+1.610200636 container died c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:58:53 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:53 np0005580781 podman[90383]: 2026-01-10 16:58:53.775207877 +0000 UTC m=+0.025740214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:53 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e64b13baeb0b7ee933361c5bd52c3e76b6afca1889412c05fe53f5c4950e9ebe-merged.mount: Deactivated successfully.
Jan 10 11:58:53 np0005580781 podman[90383]: 2026-01-10 16:58:53.882254349 +0000 UTC m=+0.132786716 container init 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:53 np0005580781 podman[90383]: 2026-01-10 16:58:53.892001661 +0000 UTC m=+0.142533978 container start 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 11:58:53 np0005580781 podman[90383]: 2026-01-10 16:58:53.897128239 +0000 UTC m=+0.147660576 container attach 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 11:58:53 np0005580781 podman[90186]: 2026-01-10 16:58:53.902663999 +0000 UTC m=+1.672002081 container remove c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe (image=quay.io/ceph/ceph:v20, name=pensive_chaplygin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 10 11:58:53 np0005580781 systemd[1]: libpod-conmon-c4043445b7ed4953de6504c483fd19fa67fb7ecb14e2c7a75b7c0ac0728f5efe.scope: Deactivated successfully.
Jan 10 11:58:53 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v51: 5 pgs: 2 creating+peering, 1 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:58:54 np0005580781 python3[90442]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:54 np0005580781 podman[90453]: 2026-01-10 16:58:54.261932316 +0000 UTC m=+0.048391929 container create f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:54 np0005580781 systemd[1]: Started libpod-conmon-f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a.scope.
Jan 10 11:58:54 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e545852a62a056df8ceb555dc2e81c2076abec15ecc3529ebf8bf77b7ed033/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e545852a62a056df8ceb555dc2e81c2076abec15ecc3529ebf8bf77b7ed033/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:54 np0005580781 podman[90453]: 2026-01-10 16:58:54.241313761 +0000 UTC m=+0.027773394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:54 np0005580781 podman[90453]: 2026-01-10 16:58:54.345982484 +0000 UTC m=+0.132442117 container init f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 11:58:54 np0005580781 podman[90453]: 2026-01-10 16:58:54.354379187 +0000 UTC m=+0.140838800 container start f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 11:58:54 np0005580781 podman[90453]: 2026-01-10 16:58:54.357897839 +0000 UTC m=+0.144357452 container attach f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:54 np0005580781 lvm[90553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:58:54 np0005580781 lvm[90553]: VG ceph_vg0 finished
Jan 10 11:58:54 np0005580781 lvm[90556]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:54 np0005580781 lvm[90556]: VG ceph_vg1 finished
Jan 10 11:58:54 np0005580781 lvm[90558]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:58:54 np0005580781 lvm[90558]: VG ceph_vg2 finished
Jan 10 11:58:54 np0005580781 lvm[90559]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:54 np0005580781 lvm[90559]: VG ceph_vg1 finished
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:54 np0005580781 lvm[90562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:58:54 np0005580781 lvm[90562]: VG ceph_vg1 finished
Jan 10 11:58:54 np0005580781 serene_hermann[90400]: {}
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 10 11:58:54 np0005580781 jovial_ritchie[90477]: pool 'cephfs.cephfs.meta' created
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 10 11:58:54 np0005580781 systemd[1]: libpod-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope: Deactivated successfully.
Jan 10 11:58:54 np0005580781 systemd[1]: libpod-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope: Consumed 1.489s CPU time.
Jan 10 11:58:54 np0005580781 podman[90383]: 2026-01-10 16:58:54.822904381 +0000 UTC m=+1.073436778 container died 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/366930870' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:58:54 np0005580781 systemd[1]: libpod-f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a.scope: Deactivated successfully.
Jan 10 11:58:54 np0005580781 podman[90453]: 2026-01-10 16:58:54.833312092 +0000 UTC m=+0.619771745 container died f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:54 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b68243fc54be9dcca46c4eb77e1668c9afc4b8dc8d11fa20059c3d1d9240604b-merged.mount: Deactivated successfully.
Jan 10 11:58:54 np0005580781 systemd[1]: var-lib-containers-storage-overlay-d9e545852a62a056df8ceb555dc2e81c2076abec15ecc3529ebf8bf77b7ed033-merged.mount: Deactivated successfully.
Jan 10 11:58:54 np0005580781 podman[90383]: 2026-01-10 16:58:54.880568497 +0000 UTC m=+1.131100804 container remove 0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 11:58:54 np0005580781 podman[90453]: 2026-01-10 16:58:54.902571323 +0000 UTC m=+0.689030976 container remove f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a (image=quay.io/ceph/ceph:v20, name=jovial_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:54 np0005580781 systemd[1]: libpod-conmon-0c7eab68423bad68e6c6f32827d26045380b9f7fba851fefb1ce98ba31ae2c76.scope: Deactivated successfully.
Jan 10 11:58:54 np0005580781 systemd[1]: libpod-conmon-f03d1b05fb82988e7de19255124aad5349fba952d952291d76c34278c9c7a92a.scope: Deactivated successfully.
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:55 np0005580781 python3[90651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:55 np0005580781 podman[90690]: 2026-01-10 16:58:55.290303333 +0000 UTC m=+0.042170449 container create 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:55 np0005580781 systemd[1]: Started libpod-conmon-28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e.scope.
Jan 10 11:58:55 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5094dc658af8368797ae236f72a654aab1b484ebab6a33d82ec391f65cfabadd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5094dc658af8368797ae236f72a654aab1b484ebab6a33d82ec391f65cfabadd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:55 np0005580781 podman[90690]: 2026-01-10 16:58:55.364733383 +0000 UTC m=+0.116600509 container init 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 11:58:55 np0005580781 podman[90690]: 2026-01-10 16:58:55.272393306 +0000 UTC m=+0.024260442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:55 np0005580781 podman[90690]: 2026-01-10 16:58:55.373119616 +0000 UTC m=+0.124986722 container start 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:58:55 np0005580781 podman[90690]: 2026-01-10 16:58:55.376338009 +0000 UTC m=+0.128205135 container attach 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 11:58:55 np0005580781 podman[90771]: 2026-01-10 16:58:55.653636159 +0000 UTC m=+0.060288342 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 11:58:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:58:55 np0005580781 podman[90771]: 2026-01-10 16:58:55.759414245 +0000 UTC m=+0.166066408 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/1969852647' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:55 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v54: 6 pgs: 4 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:58:56 np0005580781 podman[90987]: 2026-01-10 16:58:56.799256303 +0000 UTC m=+0.037694080 container create 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 10 11:58:56 np0005580781 magical_blackwell[90705]: pool 'cephfs.cephfs.data' created
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 10 11:58:56 np0005580781 systemd[1]: Started libpod-conmon-5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6.scope.
Jan 10 11:58:56 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:58:56 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/1756795060' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 10 11:58:56 np0005580781 podman[90690]: 2026-01-10 16:58:56.866059673 +0000 UTC m=+1.617926789 container died 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 11:58:56 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:56 np0005580781 systemd[1]: libpod-28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e.scope: Deactivated successfully.
Jan 10 11:58:56 np0005580781 podman[90987]: 2026-01-10 16:58:56.783279382 +0000 UTC m=+0.021717179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:56 np0005580781 podman[90987]: 2026-01-10 16:58:56.881843239 +0000 UTC m=+0.120281066 container init 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 11:58:56 np0005580781 systemd[1]: var-lib-containers-storage-overlay-5094dc658af8368797ae236f72a654aab1b484ebab6a33d82ec391f65cfabadd-merged.mount: Deactivated successfully.
Jan 10 11:58:56 np0005580781 podman[90987]: 2026-01-10 16:58:56.889870971 +0000 UTC m=+0.128308748 container start 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:56 np0005580781 epic_thompson[91004]: 167 167
Jan 10 11:58:56 np0005580781 podman[90987]: 2026-01-10 16:58:56.893474705 +0000 UTC m=+0.131912482 container attach 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:58:56 np0005580781 podman[90690]: 2026-01-10 16:58:56.918103086 +0000 UTC m=+1.669970192 container remove 28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e (image=quay.io/ceph/ceph:v20, name=magical_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 11:58:56 np0005580781 systemd[1]: libpod-5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6.scope: Deactivated successfully.
Jan 10 11:58:56 np0005580781 podman[90987]: 2026-01-10 16:58:56.921537136 +0000 UTC m=+0.159974923 container died 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:58:56 np0005580781 systemd[1]: libpod-conmon-28a1da4453d64d2ef05c3d558d640bf2dc4d1f8570a982aa651315d6728cdd0e.scope: Deactivated successfully.
Jan 10 11:58:56 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a402c461c307c871f8ba25c5ebb63bb8ab5085d27beb329c4e12e39e56048cdc-merged.mount: Deactivated successfully.
Jan 10 11:58:56 np0005580781 podman[90987]: 2026-01-10 16:58:56.960480571 +0000 UTC m=+0.198918348 container remove 5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 11:58:56 np0005580781 systemd[1]: libpod-conmon-5a9627f0a795cab30b1425b1b97ff035a3eb28f53a48fec49eebe3c7e570a1a6.scope: Deactivated successfully.
Jan 10 11:58:57 np0005580781 podman[91065]: 2026-01-10 16:58:57.150803268 +0000 UTC m=+0.047442631 container create d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 11:58:57 np0005580781 systemd[1]: Started libpod-conmon-d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2.scope.
Jan 10 11:58:57 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:57 np0005580781 podman[91065]: 2026-01-10 16:58:57.131742058 +0000 UTC m=+0.028381441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:57 np0005580781 podman[91065]: 2026-01-10 16:58:57.254200285 +0000 UTC m=+0.150839658 container init d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 11:58:57 np0005580781 podman[91065]: 2026-01-10 16:58:57.262772813 +0000 UTC m=+0.159412186 container start d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 11:58:57 np0005580781 python3[91073]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:57 np0005580781 podman[91065]: 2026-01-10 16:58:57.266565603 +0000 UTC m=+0.163204976 container attach d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:57 np0005580781 podman[91089]: 2026-01-10 16:58:57.350310322 +0000 UTC m=+0.067167502 container create b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 11:58:57 np0005580781 systemd[1]: Started libpod-conmon-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope.
Jan 10 11:58:57 np0005580781 podman[91089]: 2026-01-10 16:58:57.324270569 +0000 UTC m=+0.041127729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:57 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aaaec85c07836d384510f92b9a44557a3a6ace1808cacb9ccf71c9f336e2ff8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aaaec85c07836d384510f92b9a44557a3a6ace1808cacb9ccf71c9f336e2ff8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:57 np0005580781 podman[91089]: 2026-01-10 16:58:57.467167997 +0000 UTC m=+0.184025147 container init b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:57 np0005580781 podman[91089]: 2026-01-10 16:58:57.479084002 +0000 UTC m=+0.195941182 container start b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:57 np0005580781 podman[91089]: 2026-01-10 16:58:57.483472318 +0000 UTC m=+0.200329468 container attach b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:58:57 np0005580781 elated_kowalevski[91083]: --> passed data devices: 0 physical, 3 LVM
Jan 10 11:58:57 np0005580781 elated_kowalevski[91083]: --> All data devices are unavailable
Jan 10 11:58:57 np0005580781 systemd[1]: libpod-d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2.scope: Deactivated successfully.
Jan 10 11:58:57 np0005580781 podman[91065]: 2026-01-10 16:58:57.795653136 +0000 UTC m=+0.692292489 container died d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 11:58:57 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e70cf13c009cb2247ab01c31fea7177d4596e0f34a7770f60fecd7b36af8e321-merged.mount: Deactivated successfully.
Jan 10 11:58:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 10 11:58:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 10 11:58:57 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 10 11:58:57 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:58:57 np0005580781 podman[91065]: 2026-01-10 16:58:57.872346861 +0000 UTC m=+0.768986214 container remove d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 11:58:57 np0005580781 systemd[1]: libpod-conmon-d3cf5e933ea98439b68e79e51b337385a203e601eae8505421761cb9df389be2.scope: Deactivated successfully.
Jan 10 11:58:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 10 11:58:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 10 11:58:57 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:58:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:58:58 np0005580781 podman[91217]: 2026-01-10 16:58:58.377552705 +0000 UTC m=+0.046462253 container create 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:58:58 np0005580781 systemd[1]: Started libpod-conmon-40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767.scope.
Jan 10 11:58:58 np0005580781 podman[91217]: 2026-01-10 16:58:58.356650961 +0000 UTC m=+0.025560599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:58 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:58 np0005580781 podman[91217]: 2026-01-10 16:58:58.47360463 +0000 UTC m=+0.142514178 container init 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 11:58:58 np0005580781 podman[91217]: 2026-01-10 16:58:58.480660294 +0000 UTC m=+0.149569842 container start 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 11:58:58 np0005580781 podman[91217]: 2026-01-10 16:58:58.486000388 +0000 UTC m=+0.154909956 container attach 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:58 np0005580781 vibrant_elgamal[91234]: 167 167
Jan 10 11:58:58 np0005580781 systemd[1]: libpod-40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767.scope: Deactivated successfully.
Jan 10 11:58:58 np0005580781 podman[91217]: 2026-01-10 16:58:58.490039064 +0000 UTC m=+0.158948612 container died 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 11:58:58 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e3304e86ffa87b6ad6f9802d49d4acd04954fd9e27c74307caa839f1032a6919-merged.mount: Deactivated successfully.
Jan 10 11:58:58 np0005580781 podman[91217]: 2026-01-10 16:58:58.547426832 +0000 UTC m=+0.216336390 container remove 40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 11:58:58 np0005580781 systemd[1]: libpod-conmon-40de89d9fb3ec64d471c7eac240b5484a308654fa8b42e2e6d05e571e9940767.scope: Deactivated successfully.
Jan 10 11:58:58 np0005580781 podman[91257]: 2026-01-10 16:58:58.712056448 +0000 UTC m=+0.041431428 container create 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:58 np0005580781 systemd[1]: Started libpod-conmon-2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa.scope.
Jan 10 11:58:58 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:58 np0005580781 podman[91257]: 2026-01-10 16:58:58.696128788 +0000 UTC m=+0.025503818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:58 np0005580781 podman[91257]: 2026-01-10 16:58:58.800477262 +0000 UTC m=+0.129852262 container init 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 11:58:58 np0005580781 podman[91257]: 2026-01-10 16:58:58.808820043 +0000 UTC m=+0.138195023 container start 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:58:58 np0005580781 podman[91257]: 2026-01-10 16:58:58.812725036 +0000 UTC m=+0.142100046 container attach 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 10 11:58:58 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 10 11:58:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 10 11:58:58 np0005580781 heuristic_goodall[91104]: enabled application 'rbd' on pool 'vms'
Jan 10 11:58:58 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 10 11:58:58 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 10 11:58:58 np0005580781 systemd[1]: libpod-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope: Deactivated successfully.
Jan 10 11:58:58 np0005580781 conmon[91104]: conmon b0e8e61c8920648fda3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope/container/memory.events
Jan 10 11:58:58 np0005580781 podman[91089]: 2026-01-10 16:58:58.884092818 +0000 UTC m=+1.600949978 container died b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 11:58:58 np0005580781 systemd[1]: var-lib-containers-storage-overlay-7aaaec85c07836d384510f92b9a44557a3a6ace1808cacb9ccf71c9f336e2ff8-merged.mount: Deactivated successfully.
Jan 10 11:58:58 np0005580781 podman[91089]: 2026-01-10 16:58:58.926861063 +0000 UTC m=+1.643718193 container remove b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99 (image=quay.io/ceph/ceph:v20, name=heuristic_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 11:58:58 np0005580781 systemd[1]: libpod-conmon-b0e8e61c8920648fda3ab431ee0fa2438fd017990a7e4d378eaaf9aa17242a99.scope: Deactivated successfully.
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]: {
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:    "0": [
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:        {
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "devices": [
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "/dev/loop3"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            ],
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_name": "ceph_lv0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_size": "21470642176",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "name": "ceph_lv0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "tags": {
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.crush_device_class": "",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.encrypted": "0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osd_id": "0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.type": "block",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.vdo": "0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.with_tpm": "0"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            },
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "type": "block",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "vg_name": "ceph_vg0"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:        }
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:    ],
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:    "1": [
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:        {
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "devices": [
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "/dev/loop4"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            ],
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_name": "ceph_lv1",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_size": "21470642176",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "name": "ceph_lv1",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "tags": {
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.crush_device_class": "",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.encrypted": "0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osd_id": "1",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.type": "block",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.vdo": "0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.with_tpm": "0"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            },
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "type": "block",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "vg_name": "ceph_vg1"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:        }
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:    ],
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:    "2": [
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:        {
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "devices": [
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "/dev/loop5"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            ],
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_name": "ceph_lv2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_size": "21470642176",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "name": "ceph_lv2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "tags": {
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.cluster_name": "ceph",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.crush_device_class": "",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.encrypted": "0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.objectstore": "bluestore",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osd_id": "2",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.type": "block",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.vdo": "0",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:                "ceph.with_tpm": "0"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            },
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "type": "block",
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:            "vg_name": "ceph_vg2"
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:        }
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]:    ]
Jan 10 11:58:59 np0005580781 exciting_babbage[91274]: }
Jan 10 11:58:59 np0005580781 systemd[1]: libpod-2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa.scope: Deactivated successfully.
Jan 10 11:58:59 np0005580781 podman[91257]: 2026-01-10 16:58:59.139552647 +0000 UTC m=+0.468927637 container died 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:59 np0005580781 systemd[1]: var-lib-containers-storage-overlay-955656c6bbd7df3dc3fa281cb507e1a80f7a9a94d01355dbe37ef8cb5542655f-merged.mount: Deactivated successfully.
Jan 10 11:58:59 np0005580781 podman[91257]: 2026-01-10 16:58:59.187253045 +0000 UTC m=+0.516628035 container remove 2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:59 np0005580781 systemd[1]: libpod-conmon-2898db25555d9bbba5f35e356277d82d11d0a88700583de3c7c634aee78bb4fa.scope: Deactivated successfully.
Jan 10 11:58:59 np0005580781 python3[91319]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:58:59 np0005580781 podman[91345]: 2026-01-10 16:58:59.345804555 +0000 UTC m=+0.055942317 container create 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 11:58:59 np0005580781 systemd[1]: Started libpod-conmon-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope.
Jan 10 11:58:59 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f62e945bb43d1280efbf3cff596d143572ddc392a102c4cc3fcbb4d3480d02/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f62e945bb43d1280efbf3cff596d143572ddc392a102c4cc3fcbb4d3480d02/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:58:59 np0005580781 podman[91345]: 2026-01-10 16:58:59.322631646 +0000 UTC m=+0.032769448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:58:59 np0005580781 podman[91345]: 2026-01-10 16:58:59.430865753 +0000 UTC m=+0.141003545 container init 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 11:58:59 np0005580781 podman[91345]: 2026-01-10 16:58:59.440156171 +0000 UTC m=+0.150293963 container start 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:59 np0005580781 podman[91345]: 2026-01-10 16:58:59.444741353 +0000 UTC m=+0.154879125 container attach 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 11:58:59 np0005580781 podman[91432]: 2026-01-10 16:58:59.738615243 +0000 UTC m=+0.042167849 container create c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:58:59 np0005580781 systemd[1]: Started libpod-conmon-c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1.scope.
Jan 10 11:58:59 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:58:59 np0005580781 podman[91432]: 2026-01-10 16:58:59.817167682 +0000 UTC m=+0.120720308 container init c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:58:59 np0005580781 podman[91432]: 2026-01-10 16:58:59.721648813 +0000 UTC m=+0.025201449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:58:59 np0005580781 podman[91432]: 2026-01-10 16:58:59.825338528 +0000 UTC m=+0.128891134 container start c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 11:58:59 np0005580781 podman[91432]: 2026-01-10 16:58:59.829127817 +0000 UTC m=+0.132680423 container attach c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 10 11:58:59 np0005580781 adoring_cartwright[91448]: 167 167
Jan 10 11:58:59 np0005580781 systemd[1]: libpod-c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1.scope: Deactivated successfully.
Jan 10 11:58:59 np0005580781 podman[91432]: 2026-01-10 16:58:59.830487037 +0000 UTC m=+0.134039653 container died c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 11:58:59 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ff821b29b5f1ffe573444988a281ab06577a709c28b016d9df45f22c9c1b0dd6-merged.mount: Deactivated successfully.
Jan 10 11:58:59 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3426479788' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 10 11:58:59 np0005580781 podman[91432]: 2026-01-10 16:58:59.871149281 +0000 UTC m=+0.174701887 container remove c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:58:59 np0005580781 systemd[1]: libpod-conmon-c2987cc618f17e222f866538ded9c2174473e8e5fe0bfd2708c7a6fac6e4d3e1.scope: Deactivated successfully.
Jan 10 11:58:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 10 11:58:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 10 11:58:59 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:00 np0005580781 podman[91474]: 2026-01-10 16:59:00.03967358 +0000 UTC m=+0.045801295 container create b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:00 np0005580781 systemd[1]: Started libpod-conmon-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope.
Jan 10 11:59:00 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:00 np0005580781 podman[91474]: 2026-01-10 16:59:00.01858985 +0000 UTC m=+0.024717545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:00 np0005580781 podman[91474]: 2026-01-10 16:59:00.124193751 +0000 UTC m=+0.130321446 container init b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:00 np0005580781 podman[91474]: 2026-01-10 16:59:00.130456912 +0000 UTC m=+0.136584587 container start b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:00 np0005580781 podman[91474]: 2026-01-10 16:59:00.134076327 +0000 UTC m=+0.140204002 container attach b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 10 11:59:00 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 10 11:59:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 10 11:59:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 10 11:59:00 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 10 11:59:00 np0005580781 nervous_sutherland[91395]: enabled application 'rbd' on pool 'volumes'
Jan 10 11:59:00 np0005580781 systemd[1]: libpod-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope: Deactivated successfully.
Jan 10 11:59:00 np0005580781 conmon[91395]: conmon 67bb2766109512e04b03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope/container/memory.events
Jan 10 11:59:00 np0005580781 podman[91345]: 2026-01-10 16:59:00.909043673 +0000 UTC m=+1.619181455 container died 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:00 np0005580781 lvm[91568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:59:00 np0005580781 lvm[91571]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:59:00 np0005580781 lvm[91568]: VG ceph_vg0 finished
Jan 10 11:59:00 np0005580781 lvm[91571]: VG ceph_vg1 finished
Jan 10 11:59:00 np0005580781 systemd[1]: var-lib-containers-storage-overlay-58f62e945bb43d1280efbf3cff596d143572ddc392a102c4cc3fcbb4d3480d02-merged.mount: Deactivated successfully.
Jan 10 11:59:00 np0005580781 lvm[91583]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:59:00 np0005580781 lvm[91583]: VG ceph_vg2 finished
Jan 10 11:59:00 np0005580781 podman[91345]: 2026-01-10 16:59:00.966121062 +0000 UTC m=+1.676258834 container remove 67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8 (image=quay.io/ceph/ceph:v20, name=nervous_sutherland, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 11:59:00 np0005580781 systemd[1]: libpod-conmon-67bb2766109512e04b03955b1d2e89f7829e9cb734a2f10b0551c1fb204091a8.scope: Deactivated successfully.
Jan 10 11:59:01 np0005580781 great_antonelli[91490]: {}
Jan 10 11:59:01 np0005580781 systemd[1]: libpod-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope: Deactivated successfully.
Jan 10 11:59:01 np0005580781 podman[91474]: 2026-01-10 16:59:01.100821683 +0000 UTC m=+1.106949368 container died b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 11:59:01 np0005580781 systemd[1]: libpod-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope: Consumed 1.642s CPU time.
Jan 10 11:59:01 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1aa56428a8092378a7103d82e457fcf5c2c10a6f3d57e217a9ba9f03bf3a0a1b-merged.mount: Deactivated successfully.
Jan 10 11:59:01 np0005580781 podman[91474]: 2026-01-10 16:59:01.162067543 +0000 UTC m=+1.168195258 container remove b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:01 np0005580781 systemd[1]: libpod-conmon-b08fe1aa56bd93028a01aecd5630730dbf9e7cea47a26ad14548ce09816552ab.scope: Deactivated successfully.
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:01 np0005580781 python3[91618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:01 np0005580781 podman[91648]: 2026-01-10 16:59:01.340438154 +0000 UTC m=+0.044925407 container create 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 11:59:01 np0005580781 systemd[1]: Started libpod-conmon-4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9.scope.
Jan 10 11:59:01 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:01 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e939296fa3d608fd68f5cb14a358468826e4f0c4b78d247b50545fd9647e16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:01 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e939296fa3d608fd68f5cb14a358468826e4f0c4b78d247b50545fd9647e16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:01 np0005580781 podman[91648]: 2026-01-10 16:59:01.401163408 +0000 UTC m=+0.105650681 container init 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:01 np0005580781 podman[91648]: 2026-01-10 16:59:01.407987256 +0000 UTC m=+0.112474509 container start 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 11:59:01 np0005580781 podman[91648]: 2026-01-10 16:59:01.410648472 +0000 UTC m=+0.115135725 container attach 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:01 np0005580781 podman[91648]: 2026-01-10 16:59:01.323319651 +0000 UTC m=+0.027806924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2629164319' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:01 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 10 11:59:01 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 10 11:59:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 10 11:59:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 10 11:59:02 np0005580781 cranky_black[91665]: enabled application 'rbd' on pool 'backups'
Jan 10 11:59:02 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 10 11:59:02 np0005580781 systemd[1]: libpod-4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9.scope: Deactivated successfully.
Jan 10 11:59:02 np0005580781 podman[91648]: 2026-01-10 16:59:02.26005729 +0000 UTC m=+0.964544543 container died 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:02 np0005580781 systemd[1]: var-lib-containers-storage-overlay-51e939296fa3d608fd68f5cb14a358468826e4f0c4b78d247b50545fd9647e16-merged.mount: Deactivated successfully.
Jan 10 11:59:02 np0005580781 podman[91648]: 2026-01-10 16:59:02.295324888 +0000 UTC m=+0.999812141 container remove 4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9 (image=quay.io/ceph/ceph:v20, name=cranky_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 11:59:02 np0005580781 systemd[1]: libpod-conmon-4c2b4e297928b102d20224d55779588195d7e09745e54a6f4440577442e412e9.scope: Deactivated successfully.
Jan 10 11:59:02 np0005580781 python3[91726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:02 np0005580781 podman[91727]: 2026-01-10 16:59:02.650671423 +0000 UTC m=+0.042713404 container create 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 11:59:02 np0005580781 systemd[1]: Started libpod-conmon-114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3.scope.
Jan 10 11:59:02 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea3be0a0e802c94ce5ae82fbee91f7c8cff4fcb8022269b42808074b725d47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea3be0a0e802c94ce5ae82fbee91f7c8cff4fcb8022269b42808074b725d47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:02 np0005580781 podman[91727]: 2026-01-10 16:59:02.631137279 +0000 UTC m=+0.023179060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:02 np0005580781 podman[91727]: 2026-01-10 16:59:02.729160691 +0000 UTC m=+0.121202472 container init 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:02 np0005580781 podman[91727]: 2026-01-10 16:59:02.734990969 +0000 UTC m=+0.127032730 container start 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:02 np0005580781 podman[91727]: 2026-01-10 16:59:02.738719957 +0000 UTC m=+0.130761778 container attach 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:02 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/3862856904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 10 11:59:03 np0005580781 laughing_swartz[91743]: enabled application 'rbd' on pool 'images'
Jan 10 11:59:03 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 10 11:59:03 np0005580781 systemd[1]: libpod-114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3.scope: Deactivated successfully.
Jan 10 11:59:03 np0005580781 podman[91727]: 2026-01-10 16:59:03.951584813 +0000 UTC m=+1.343626574 container died 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 11:59:03 np0005580781 systemd[1]: var-lib-containers-storage-overlay-69ea3be0a0e802c94ce5ae82fbee91f7c8cff4fcb8022269b42808074b725d47-merged.mount: Deactivated successfully.
Jan 10 11:59:03 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:03 np0005580781 podman[91727]: 2026-01-10 16:59:03.994666128 +0000 UTC m=+1.386707879 container remove 114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3 (image=quay.io/ceph/ceph:v20, name=laughing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:04 np0005580781 systemd[1]: libpod-conmon-114882a091683fb8c4f95a657ee0c496ce3154895cfd3562928fb0305dfcf4a3.scope: Deactivated successfully.
Jan 10 11:59:04 np0005580781 python3[91804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:04 np0005580781 podman[91805]: 2026-01-10 16:59:04.337040768 +0000 UTC m=+0.042811747 container create c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:04 np0005580781 systemd[1]: Started libpod-conmon-c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047.scope.
Jan 10 11:59:04 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:04 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368d0203b29f2d0665b030fd1939d9f56767d760638851a0a75323c4cb05cee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:04 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f368d0203b29f2d0665b030fd1939d9f56767d760638851a0a75323c4cb05cee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:04 np0005580781 podman[91805]: 2026-01-10 16:59:04.411627743 +0000 UTC m=+0.117398752 container init c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:04 np0005580781 podman[91805]: 2026-01-10 16:59:04.316862235 +0000 UTC m=+0.022633234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:04 np0005580781 podman[91805]: 2026-01-10 16:59:04.417791091 +0000 UTC m=+0.123562070 container start c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 10 11:59:04 np0005580781 podman[91805]: 2026-01-10 16:59:04.420934742 +0000 UTC m=+0.126705721 container attach c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/151479732' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 10 11:59:04 np0005580781 gifted_mahavira[91820]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 10 11:59:04 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 10 11:59:04 np0005580781 systemd[1]: libpod-c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047.scope: Deactivated successfully.
Jan 10 11:59:04 np0005580781 podman[91805]: 2026-01-10 16:59:04.949259713 +0000 UTC m=+0.655030692 container died c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:04 np0005580781 systemd[1]: var-lib-containers-storage-overlay-f368d0203b29f2d0665b030fd1939d9f56767d760638851a0a75323c4cb05cee-merged.mount: Deactivated successfully.
Jan 10 11:59:05 np0005580781 podman[91805]: 2026-01-10 16:59:05.001637146 +0000 UTC m=+0.707408125 container remove c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047 (image=quay.io/ceph/ceph:v20, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:05 np0005580781 systemd[1]: libpod-conmon-c79b4bd84ea9bcc18fdc9f43c40d00441b0b2e57514c547daac492d7f019b047.scope: Deactivated successfully.
Jan 10 11:59:05 np0005580781 python3[91881]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:05 np0005580781 podman[91882]: 2026-01-10 16:59:05.338979661 +0000 UTC m=+0.048740839 container create 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:05 np0005580781 systemd[1]: Started libpod-conmon-3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c.scope.
Jan 10 11:59:05 np0005580781 podman[91882]: 2026-01-10 16:59:05.318735236 +0000 UTC m=+0.028496414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:05 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa06df2b7865e9af255d26793c306c91c981a826a841924c2ce69aa778ad37c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa06df2b7865e9af255d26793c306c91c981a826a841924c2ce69aa778ad37c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:05 np0005580781 podman[91882]: 2026-01-10 16:59:05.430596857 +0000 UTC m=+0.140358035 container init 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:05 np0005580781 podman[91882]: 2026-01-10 16:59:05.438804114 +0000 UTC m=+0.148565272 container start 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 11:59:05 np0005580781 podman[91882]: 2026-01-10 16:59:05.442835411 +0000 UTC m=+0.152596599 container attach 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:05 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 10 11:59:05 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 10 11:59:05 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:05 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 10 11:59:06 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/1944505131' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 10 11:59:06 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 10 11:59:06 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 10 11:59:06 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 10 11:59:06 np0005580781 confident_mcnulty[91897]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 10 11:59:06 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 10 11:59:06 np0005580781 systemd[1]: libpod-3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c.scope: Deactivated successfully.
Jan 10 11:59:06 np0005580781 podman[91882]: 2026-01-10 16:59:06.132333678 +0000 UTC m=+0.842094876 container died 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:06 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ffa06df2b7865e9af255d26793c306c91c981a826a841924c2ce69aa778ad37c-merged.mount: Deactivated successfully.
Jan 10 11:59:06 np0005580781 podman[91882]: 2026-01-10 16:59:06.177811082 +0000 UTC m=+0.887572240 container remove 3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c (image=quay.io/ceph/ceph:v20, name=confident_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 11:59:06 np0005580781 systemd[1]: libpod-conmon-3509f7e69c79d717bc34ecb60ac957e8196b2358bfb92e5ef862d27b424f587c.scope: Deactivated successfully.
Jan 10 11:59:07 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2673783167' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 10 11:59:07 np0005580781 python3[92010]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:59:07 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:08 np0005580781 python3[92081]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064347.6181085-36623-166873741844071/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:59:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:08 np0005580781 python3[92131]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:08 np0005580781 podman[92132]: 2026-01-10 16:59:08.848725527 +0000 UTC m=+0.048250824 container create 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 11:59:08 np0005580781 systemd[1]: Started libpod-conmon-987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97.scope.
Jan 10 11:59:08 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:08 np0005580781 podman[92132]: 2026-01-10 16:59:08.82665105 +0000 UTC m=+0.026176147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:08 np0005580781 podman[92132]: 2026-01-10 16:59:08.945343968 +0000 UTC m=+0.144869095 container init 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:08 np0005580781 podman[92132]: 2026-01-10 16:59:08.952730592 +0000 UTC m=+0.152255689 container start 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 11:59:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:59:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:59:08 np0005580781 podman[92132]: 2026-01-10 16:59:08.957079837 +0000 UTC m=+0.156604924 container attach 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:59:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:59:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:59:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:59:09 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14230 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:59:09 np0005580781 ceph-mgr[75538]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 10 11:59:09 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0[75245]: 2026-01-10T16:59:09.517+0000 7f8fff71e640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e2 new map
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2026-01-10T16:59:09:517838+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2026-01-10T16:59:09.517425+0000
    modified	2026-01-10T16:59:09.517425+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members: 
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 10 11:59:09 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 10 11:59:09 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 11:59:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:09 np0005580781 ceph-mgr[75538]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 10 11:59:09 np0005580781 systemd[1]: libpod-987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97.scope: Deactivated successfully.
Jan 10 11:59:09 np0005580781 podman[92132]: 2026-01-10 16:59:09.580031053 +0000 UTC m=+0.779556160 container died 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 11:59:09 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b343ef95b1113f107ad78f622b0f6854e2e08d39c24ef97304d3b2b0f616ecef-merged.mount: Deactivated successfully.
Jan 10 11:59:09 np0005580781 podman[92132]: 2026-01-10 16:59:09.628534384 +0000 UTC m=+0.828059511 container remove 987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97 (image=quay.io/ceph/ceph:v20, name=crazy_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 10 11:59:09 np0005580781 systemd[1]: libpod-conmon-987e7174c63d7c7f199fff120341784f59e567dce4c6ff8d5f304883bb864a97.scope: Deactivated successfully.
Jan 10 11:59:09 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:09 np0005580781 python3[92259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:10 np0005580781 podman[92277]: 2026-01-10 16:59:10.049661639 +0000 UTC m=+0.050209241 container create 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 11:59:10 np0005580781 systemd[1]: Started libpod-conmon-51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f.scope.
Jan 10 11:59:10 np0005580781 podman[92277]: 2026-01-10 16:59:10.029305991 +0000 UTC m=+0.029853613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:10 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:10 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:10 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:10 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:10 np0005580781 podman[92277]: 2026-01-10 16:59:10.152557542 +0000 UTC m=+0.153105184 container init 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:10 np0005580781 podman[92277]: 2026-01-10 16:59:10.158065191 +0000 UTC m=+0.158612793 container start 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:10 np0005580781 podman[92277]: 2026-01-10 16:59:10.169752328 +0000 UTC m=+0.170299980 container attach 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:10 np0005580781 podman[92320]: 2026-01-10 16:59:10.181499958 +0000 UTC m=+0.062537188 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:10 np0005580781 podman[92320]: 2026-01-10 16:59:10.307340453 +0000 UTC m=+0.188377673 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 11:59:10 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14232 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 11:59:10 np0005580781 ceph-mgr[75538]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 10 11:59:10 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:10 np0005580781 cranky_cohen[92316]: Scheduled mds.cephfs update...
Jan 10 11:59:10 np0005580781 systemd[1]: libpod-51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f.scope: Deactivated successfully.
Jan 10 11:59:10 np0005580781 podman[92277]: 2026-01-10 16:59:10.626291647 +0000 UTC m=+0.626839289 container died 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:10 np0005580781 systemd[1]: var-lib-containers-storage-overlay-cb04ba13323bafbeda971b62cf9d11ff279098be3d7fbb472f5dd950154599e8-merged.mount: Deactivated successfully.
Jan 10 11:59:10 np0005580781 podman[92277]: 2026-01-10 16:59:10.677874727 +0000 UTC m=+0.678422319 container remove 51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f (image=quay.io/ceph/ceph:v20, name=cranky_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 11:59:10 np0005580781 systemd[1]: libpod-conmon-51d7503ede518f65a16b63e0db33404803127328dbd88344fef94678880a2a3f.scope: Deactivated successfully.
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: Saving service mds.cephfs spec with placement compute-0
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:11 np0005580781 python3[92646]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:59:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:59:11 np0005580781 python3[92759]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064351.1436458-36655-105246331815915/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=7cc641ddc3c198361b04b7e13e353930d285d63f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 11:59:11 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:12 np0005580781 podman[92822]: 2026-01-10 16:59:12.03463918 +0000 UTC m=+0.057657376 container create 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 11:59:12 np0005580781 systemd[1]: Started libpod-conmon-9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4.scope.
Jan 10 11:59:12 np0005580781 podman[92822]: 2026-01-10 16:59:12.01040953 +0000 UTC m=+0.033427796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:12 np0005580781 podman[92822]: 2026-01-10 16:59:12.120959933 +0000 UTC m=+0.143978139 container init 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:12 np0005580781 podman[92822]: 2026-01-10 16:59:12.128190722 +0000 UTC m=+0.151208908 container start 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 11:59:12 np0005580781 podman[92822]: 2026-01-10 16:59:12.132030193 +0000 UTC m=+0.155048379 container attach 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:12 np0005580781 jolly_fermi[92839]: 167 167
Jan 10 11:59:12 np0005580781 systemd[1]: libpod-9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4.scope: Deactivated successfully.
Jan 10 11:59:12 np0005580781 podman[92822]: 2026-01-10 16:59:12.136357448 +0000 UTC m=+0.159375664 container died 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:12 np0005580781 ceph-mon[75249]: Saving service mds.cephfs spec with placement compute-0
Jan 10 11:59:12 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:59:12 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:12 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:59:12 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e879dee90e1bc60595af05e9a28d303f828e7cbe7e8cac825b3b39b3e591c31c-merged.mount: Deactivated successfully.
Jan 10 11:59:12 np0005580781 podman[92822]: 2026-01-10 16:59:12.192063827 +0000 UTC m=+0.215082043 container remove 9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_fermi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:12 np0005580781 systemd[1]: libpod-conmon-9be9c89956cf9e3bf60700b6dd0b3247dd428d55d42e02f6830cb6be36b482b4.scope: Deactivated successfully.
Jan 10 11:59:12 np0005580781 python3[92876]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:12 np0005580781 podman[92889]: 2026-01-10 16:59:12.381082707 +0000 UTC m=+0.049698577 container create e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 11:59:12 np0005580781 podman[92888]: 2026-01-10 16:59:12.387021419 +0000 UTC m=+0.058086429 container create eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 11:59:12 np0005580781 systemd[1]: Started libpod-conmon-eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee.scope.
Jan 10 11:59:12 np0005580781 systemd[1]: Started libpod-conmon-e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658.scope.
Jan 10 11:59:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d152f03b0e145b4be6b0c737653456611f2f193035b579c630eda5a2848101f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d152f03b0e145b4be6b0c737653456611f2f193035b579c630eda5a2848101f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:12 np0005580781 podman[92888]: 2026-01-10 16:59:12.363434887 +0000 UTC m=+0.034499997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:12 np0005580781 podman[92889]: 2026-01-10 16:59:12.361018427 +0000 UTC m=+0.029634327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:12 np0005580781 podman[92888]: 2026-01-10 16:59:12.464735214 +0000 UTC m=+0.135800254 container init eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 11:59:12 np0005580781 podman[92889]: 2026-01-10 16:59:12.475529985 +0000 UTC m=+0.144145925 container init e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 11:59:12 np0005580781 podman[92888]: 2026-01-10 16:59:12.476295457 +0000 UTC m=+0.147360467 container start eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 11:59:12 np0005580781 podman[92889]: 2026-01-10 16:59:12.480801928 +0000 UTC m=+0.149417818 container start e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 11:59:12 np0005580781 podman[92888]: 2026-01-10 16:59:12.482144146 +0000 UTC m=+0.153209166 container attach eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:12 np0005580781 podman[92889]: 2026-01-10 16:59:12.486597785 +0000 UTC m=+0.155213855 container attach e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:12 np0005580781 infallible_tharp[92919]: --> passed data devices: 0 physical, 3 LVM
Jan 10 11:59:12 np0005580781 infallible_tharp[92919]: --> All data devices are unavailable
Jan 10 11:59:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 10 11:59:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 10 11:59:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 10 11:59:13 np0005580781 systemd[1]: libpod-e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658.scope: Deactivated successfully.
Jan 10 11:59:13 np0005580781 podman[92889]: 2026-01-10 16:59:13.008271555 +0000 UTC m=+0.676887425 container died e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 11:59:13 np0005580781 systemd[1]: libpod-eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee.scope: Deactivated successfully.
Jan 10 11:59:13 np0005580781 podman[92888]: 2026-01-10 16:59:13.018984094 +0000 UTC m=+0.690049114 container died eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1d152f03b0e145b4be6b0c737653456611f2f193035b579c630eda5a2848101f-merged.mount: Deactivated successfully.
Jan 10 11:59:13 np0005580781 podman[92889]: 2026-01-10 16:59:13.057041944 +0000 UTC m=+0.725657824 container remove e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658 (image=quay.io/ceph/ceph:v20, name=elegant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:13 np0005580781 systemd[1]: libpod-conmon-e33f50414f6c8c0804b31c8aa90fd80dfb83857c886a4c8be88ea2cd3523c658.scope: Deactivated successfully.
Jan 10 11:59:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-8cbfc0578d51b0cdec5f6eb77f42fa32e916ddfee9f1285e0f134465708f59b6-merged.mount: Deactivated successfully.
Jan 10 11:59:13 np0005580781 podman[92888]: 2026-01-10 16:59:13.109112858 +0000 UTC m=+0.780177868 container remove eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 11:59:13 np0005580781 systemd[1]: libpod-conmon-eaf768db8ad85cf03d2a6557ffb49a9d828d9a030bc51b79ca52cd5238e1e3ee.scope: Deactivated successfully.
Jan 10 11:59:13 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 10 11:59:13 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/2223794276' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 10 11:59:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:13 np0005580781 podman[93051]: 2026-01-10 16:59:13.623663972 +0000 UTC m=+0.060729635 container create 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 11:59:13 np0005580781 systemd[1]: Started libpod-conmon-5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae.scope.
Jan 10 11:59:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:13 np0005580781 podman[93051]: 2026-01-10 16:59:13.601822121 +0000 UTC m=+0.038887804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:13 np0005580781 podman[93051]: 2026-01-10 16:59:13.703194449 +0000 UTC m=+0.140260132 container init 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 11:59:13 np0005580781 podman[93051]: 2026-01-10 16:59:13.714315081 +0000 UTC m=+0.151380744 container start 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:13 np0005580781 podman[93051]: 2026-01-10 16:59:13.7177665 +0000 UTC m=+0.154832163 container attach 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:13 np0005580781 happy_almeida[93081]: 167 167
Jan 10 11:59:13 np0005580781 systemd[1]: libpod-5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae.scope: Deactivated successfully.
Jan 10 11:59:13 np0005580781 podman[93051]: 2026-01-10 16:59:13.721185459 +0000 UTC m=+0.158251122 container died 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 11:59:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e9033654c66f3db4adb68a8adb105f60425485e364f6d1e399a8b3ab4f2cdad7-merged.mount: Deactivated successfully.
Jan 10 11:59:13 np0005580781 podman[93051]: 2026-01-10 16:59:13.770102552 +0000 UTC m=+0.207168225 container remove 5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:13 np0005580781 systemd[1]: libpod-conmon-5c2dfeae84285543a2eb78035a4191c46f27d74fe03a2c59f550a3c49e3960ae.scope: Deactivated successfully.
Jan 10 11:59:13 np0005580781 python3[93097]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:13 np0005580781 podman[93114]: 2026-01-10 16:59:13.966777094 +0000 UTC m=+0.059351586 container create 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:13 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:13 np0005580781 podman[93120]: 2026-01-10 16:59:13.994112973 +0000 UTC m=+0.073011390 container create 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 11:59:14 np0005580781 systemd[1]: Started libpod-conmon-06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13.scope.
Jan 10 11:59:14 np0005580781 systemd[1]: Started libpod-conmon-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope.
Jan 10 11:59:14 np0005580781 podman[93114]: 2026-01-10 16:59:13.941844723 +0000 UTC m=+0.034419295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:14 np0005580781 podman[93120]: 2026-01-10 16:59:13.954967772 +0000 UTC m=+0.033866239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:14 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:14 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb283939facb7ecf42c015b48b1b4be06879ec013f4e40bf61c66cc5859a49e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb283939facb7ecf42c015b48b1b4be06879ec013f4e40bf61c66cc5859a49e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:14 np0005580781 podman[93120]: 2026-01-10 16:59:14.078924493 +0000 UTC m=+0.157822930 container init 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:14 np0005580781 podman[93114]: 2026-01-10 16:59:14.083785984 +0000 UTC m=+0.176360486 container init 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 11:59:14 np0005580781 podman[93120]: 2026-01-10 16:59:14.093798843 +0000 UTC m=+0.172697280 container start 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:14 np0005580781 podman[93120]: 2026-01-10 16:59:14.097944863 +0000 UTC m=+0.176843290 container attach 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 11:59:14 np0005580781 podman[93114]: 2026-01-10 16:59:14.098344624 +0000 UTC m=+0.190919106 container start 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 11:59:14 np0005580781 podman[93114]: 2026-01-10 16:59:14.10271214 +0000 UTC m=+0.195286652 container attach 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]: {
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:    "0": [
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:        {
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "devices": [
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "/dev/loop3"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            ],
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_name": "ceph_lv0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_size": "21470642176",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "name": "ceph_lv0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "tags": {
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.crush_device_class": "",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.encrypted": "0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osd_id": "0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.type": "block",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.vdo": "0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.with_tpm": "0"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            },
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "type": "block",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "vg_name": "ceph_vg0"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:        }
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:    ],
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:    "1": [
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:        {
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "devices": [
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "/dev/loop4"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            ],
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_name": "ceph_lv1",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_size": "21470642176",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "name": "ceph_lv1",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "tags": {
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.crush_device_class": "",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.encrypted": "0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osd_id": "1",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.type": "block",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.vdo": "0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.with_tpm": "0"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            },
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "type": "block",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "vg_name": "ceph_vg1"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:        }
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:    ],
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:    "2": [
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:        {
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "devices": [
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "/dev/loop5"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            ],
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_name": "ceph_lv2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_size": "21470642176",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "name": "ceph_lv2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "tags": {
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.crush_device_class": "",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.encrypted": "0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osd_id": "2",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.type": "block",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.vdo": "0",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:                "ceph.with_tpm": "0"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            },
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "type": "block",
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:            "vg_name": "ceph_vg2"
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:        }
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]:    ]
Jan 10 11:59:14 np0005580781 agitated_kilby[93153]: }
Jan 10 11:59:14 np0005580781 systemd[1]: libpod-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope: Deactivated successfully.
Jan 10 11:59:14 np0005580781 conmon[93153]: conmon 5811b3b9d3a41c829367 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope/container/memory.events
Jan 10 11:59:14 np0005580781 podman[93120]: 2026-01-10 16:59:14.436166753 +0000 UTC m=+0.515065170 container died 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:14 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c34f7c90e13cb7d68b36c6c6c7d9c25659953278207c49a5cb0259d0d1213ec4-merged.mount: Deactivated successfully.
Jan 10 11:59:14 np0005580781 podman[93120]: 2026-01-10 16:59:14.488848145 +0000 UTC m=+0.567746562 container remove 5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 11:59:14 np0005580781 systemd[1]: libpod-conmon-5811b3b9d3a41c829367215c1bc5a4a6923096f0358266e4bf60cdcbf42e301d.scope: Deactivated successfully.
Jan 10 11:59:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 11:59:14 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192702526' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 11:59:14 np0005580781 nifty_ellis[93151]: 
Jan 10 11:59:14 np0005580781 nifty_ellis[93151]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":116,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1768064329,"num_in_osds":3,"osd_in_since":1768064301,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83918848,"bytes_avail":64328007680,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-10T16:59:09:517838+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-10T16:58:41.970835+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 10 11:59:14 np0005580781 systemd[1]: libpod-06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13.scope: Deactivated successfully.
Jan 10 11:59:14 np0005580781 podman[93114]: 2026-01-10 16:59:14.656100786 +0000 UTC m=+0.748675268 container died 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 11:59:14 np0005580781 systemd[1]: var-lib-containers-storage-overlay-eb283939facb7ecf42c015b48b1b4be06879ec013f4e40bf61c66cc5859a49e1-merged.mount: Deactivated successfully.
Jan 10 11:59:14 np0005580781 podman[93114]: 2026-01-10 16:59:14.707432219 +0000 UTC m=+0.800006691 container remove 06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13 (image=quay.io/ceph/ceph:v20, name=nifty_ellis, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 11:59:14 np0005580781 systemd[1]: libpod-conmon-06e1a649be2dce06b3ac0e51f2a968696bccddc10d00d055bb532db12b1e7f13.scope: Deactivated successfully.
Jan 10 11:59:15 np0005580781 podman[93296]: 2026-01-10 16:59:15.021990036 +0000 UTC m=+0.052598230 container create a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:15 np0005580781 python3[93282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:15 np0005580781 systemd[1]: Started libpod-conmon-a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3.scope.
Jan 10 11:59:15 np0005580781 podman[93296]: 2026-01-10 16:59:14.998094686 +0000 UTC m=+0.028702930 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:15 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:15 np0005580781 podman[93313]: 2026-01-10 16:59:15.113219571 +0000 UTC m=+0.049701646 container create b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 11:59:15 np0005580781 podman[93296]: 2026-01-10 16:59:15.122034726 +0000 UTC m=+0.152642970 container init a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 11:59:15 np0005580781 podman[93296]: 2026-01-10 16:59:15.130552162 +0000 UTC m=+0.161160326 container start a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 11:59:15 np0005580781 podman[93296]: 2026-01-10 16:59:15.134027123 +0000 UTC m=+0.164635327 container attach a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 11:59:15 np0005580781 vigilant_kare[93319]: 167 167
Jan 10 11:59:15 np0005580781 systemd[1]: libpod-a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3.scope: Deactivated successfully.
Jan 10 11:59:15 np0005580781 podman[93296]: 2026-01-10 16:59:15.137535534 +0000 UTC m=+0.168143708 container died a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:15 np0005580781 systemd[1]: Started libpod-conmon-b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59.scope.
Jan 10 11:59:15 np0005580781 systemd[1]: var-lib-containers-storage-overlay-cb92743d761508165f012064c41babd56eb85ca0971c0799c18ebd49dd1ac05f-merged.mount: Deactivated successfully.
Jan 10 11:59:15 np0005580781 podman[93296]: 2026-01-10 16:59:15.177977412 +0000 UTC m=+0.208585586 container remove a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 11:59:15 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:15 np0005580781 podman[93313]: 2026-01-10 16:59:15.091879725 +0000 UTC m=+0.028361830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ca54eefc3434571b8510d6578b8d16c001bb8532894a94ba1b9e875d91c57b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ca54eefc3434571b8510d6578b8d16c001bb8532894a94ba1b9e875d91c57b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:15 np0005580781 systemd[1]: libpod-conmon-a52dfb9871dda90395d3ba5d362c7de21b9597fb5b27d05d45b9a39106f904e3.scope: Deactivated successfully.
Jan 10 11:59:15 np0005580781 podman[93313]: 2026-01-10 16:59:15.208102062 +0000 UTC m=+0.144584187 container init b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:15 np0005580781 podman[93313]: 2026-01-10 16:59:15.215321731 +0000 UTC m=+0.151803806 container start b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:15 np0005580781 podman[93313]: 2026-01-10 16:59:15.2187603 +0000 UTC m=+0.155242385 container attach b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:15 np0005580781 podman[93354]: 2026-01-10 16:59:15.34303856 +0000 UTC m=+0.046504194 container create 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 11:59:15 np0005580781 systemd[1]: Started libpod-conmon-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope.
Jan 10 11:59:15 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:15 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:15 np0005580781 podman[93354]: 2026-01-10 16:59:15.323547617 +0000 UTC m=+0.027013281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:15 np0005580781 podman[93354]: 2026-01-10 16:59:15.434850863 +0000 UTC m=+0.138316527 container init 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:15 np0005580781 podman[93354]: 2026-01-10 16:59:15.455014445 +0000 UTC m=+0.158480079 container start 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:15 np0005580781 podman[93354]: 2026-01-10 16:59:15.45899754 +0000 UTC m=+0.162463214 container attach 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 11:59:15 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 11:59:15 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287799242' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 11:59:15 np0005580781 sharp_hellman[93340]: 
Jan 10 11:59:15 np0005580781 sharp_hellman[93340]: {"epoch":1,"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","modified":"2026-01-10T16:57:13.592121Z","created":"2026-01-10T16:57:13.592121Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 10 11:59:15 np0005580781 sharp_hellman[93340]: dumped monmap epoch 1
Jan 10 11:59:15 np0005580781 systemd[1]: libpod-b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59.scope: Deactivated successfully.
Jan 10 11:59:15 np0005580781 podman[93313]: 2026-01-10 16:59:15.843900448 +0000 UTC m=+0.780382533 container died b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 11:59:15 np0005580781 systemd[1]: var-lib-containers-storage-overlay-86ca54eefc3434571b8510d6578b8d16c001bb8532894a94ba1b9e875d91c57b-merged.mount: Deactivated successfully.
Jan 10 11:59:15 np0005580781 podman[93313]: 2026-01-10 16:59:15.885365416 +0000 UTC m=+0.821847491 container remove b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59 (image=quay.io/ceph/ceph:v20, name=sharp_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 11:59:15 np0005580781 systemd[1]: libpod-conmon-b1eda6c60ef4c876cbf3a27fa251f01fcd87b7ef0e02467704e8cbc868e42c59.scope: Deactivated successfully.
Jan 10 11:59:15 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:16 np0005580781 lvm[93485]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:59:16 np0005580781 lvm[93485]: VG ceph_vg1 finished
Jan 10 11:59:16 np0005580781 lvm[93484]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:59:16 np0005580781 lvm[93484]: VG ceph_vg0 finished
Jan 10 11:59:16 np0005580781 lvm[93495]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:59:16 np0005580781 lvm[93495]: VG ceph_vg2 finished
Jan 10 11:59:16 np0005580781 determined_wilson[93390]: {}
Jan 10 11:59:16 np0005580781 systemd[1]: libpod-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope: Deactivated successfully.
Jan 10 11:59:16 np0005580781 systemd[1]: libpod-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope: Consumed 1.475s CPU time.
Jan 10 11:59:16 np0005580781 podman[93354]: 2026-01-10 16:59:16.395347878 +0000 UTC m=+1.098813542 container died 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 11:59:16 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3ab4abd1ed44f29b8a1ec51024b5a87bdd2f7fdf14e434a49106fd1e7fdf1d0c-merged.mount: Deactivated successfully.
Jan 10 11:59:16 np0005580781 python3[93514]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:16 np0005580781 podman[93354]: 2026-01-10 16:59:16.453839647 +0000 UTC m=+1.157305281 container remove 6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 11:59:16 np0005580781 systemd[1]: libpod-conmon-6fc26661f23f7559a6333506820fc7706874b4357448b2be8ea66daa03842a4d.scope: Deactivated successfully.
Jan 10 11:59:16 np0005580781 podman[93528]: 2026-01-10 16:59:16.500127335 +0000 UTC m=+0.045888267 container create 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:16 np0005580781 systemd[1]: Started libpod-conmon-4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd.scope.
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:16 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev 5176c768-cabd-4d63-b825-44d378ef605b (Updating mds.cephfs deployment (+1 -> 1))
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 10 11:59:16 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 10 11:59:16 np0005580781 podman[93528]: 2026-01-10 16:59:16.482354561 +0000 UTC m=+0.028115513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:59:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:59:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1daadcaf547574d868e0a71a74fea7c6be1d9caa8d3b67620d4d83cc986aba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:16 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1daadcaf547574d868e0a71a74fea7c6be1d9caa8d3b67620d4d83cc986aba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:16 np0005580781 ceph-mgr[75538]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.anmivh on compute-0
Jan 10 11:59:16 np0005580781 ceph-mgr[75538]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.anmivh on compute-0
Jan 10 11:59:16 np0005580781 podman[93528]: 2026-01-10 16:59:16.597386784 +0000 UTC m=+0.143147766 container init 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 11:59:16 np0005580781 podman[93528]: 2026-01-10 16:59:16.605870219 +0000 UTC m=+0.151631161 container start 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:59:16 np0005580781 podman[93528]: 2026-01-10 16:59:16.609856504 +0000 UTC m=+0.155617436 container attach 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 10 11:59:17 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/284800316' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 10 11:59:17 np0005580781 tender_roentgen[93543]: [client.openstack]
Jan 10 11:59:17 np0005580781 tender_roentgen[93543]: #011key = AQC7hGJpAAAAABAAX18vjtSqzsniwZc0Ni8AQg==
Jan 10 11:59:17 np0005580781 tender_roentgen[93543]: #011caps mgr = "allow *"
Jan 10 11:59:17 np0005580781 tender_roentgen[93543]: #011caps mon = "profile rbd"
Jan 10 11:59:17 np0005580781 tender_roentgen[93543]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 10 11:59:17 np0005580781 systemd[1]: libpod-4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd.scope: Deactivated successfully.
Jan 10 11:59:17 np0005580781 podman[93528]: 2026-01-10 16:59:17.175187235 +0000 UTC m=+0.720948167 container died 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:59:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 10 11:59:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anmivh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 10 11:59:17 np0005580781 ceph-mon[75249]: from='client.? 192.168.122.100:0/284800316' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 10 11:59:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay-be1daadcaf547574d868e0a71a74fea7c6be1d9caa8d3b67620d4d83cc986aba-merged.mount: Deactivated successfully.
Jan 10 11:59:17 np0005580781 podman[93658]: 2026-01-10 16:59:17.212249626 +0000 UTC m=+0.052633821 container create f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:17 np0005580781 podman[93528]: 2026-01-10 16:59:17.242241732 +0000 UTC m=+0.788002654 container remove 4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd (image=quay.io/ceph/ceph:v20, name=tender_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:17 np0005580781 systemd[1]: Started libpod-conmon-f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a.scope.
Jan 10 11:59:17 np0005580781 systemd[1]: libpod-conmon-4d7e4af64c1729c147d1e1adc779fa3abae5f327e565ce9478d5550f0891fdfd.scope: Deactivated successfully.
Jan 10 11:59:17 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:17 np0005580781 podman[93658]: 2026-01-10 16:59:17.191387703 +0000 UTC m=+0.031771928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:17 np0005580781 podman[93658]: 2026-01-10 16:59:17.295310215 +0000 UTC m=+0.135694420 container init f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 11:59:17 np0005580781 podman[93658]: 2026-01-10 16:59:17.303885533 +0000 UTC m=+0.144269728 container start f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:17 np0005580781 flamboyant_nash[93687]: 167 167
Jan 10 11:59:17 np0005580781 systemd[1]: libpod-f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a.scope: Deactivated successfully.
Jan 10 11:59:17 np0005580781 podman[93658]: 2026-01-10 16:59:17.309152945 +0000 UTC m=+0.149537140 container attach f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:17 np0005580781 podman[93658]: 2026-01-10 16:59:17.310314529 +0000 UTC m=+0.150698724 container died f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay-fab251699d484795246e069e98296bd7e6aba6ecec43738b415006837a7b4b19-merged.mount: Deactivated successfully.
Jan 10 11:59:17 np0005580781 podman[93658]: 2026-01-10 16:59:17.355384081 +0000 UTC m=+0.195768276 container remove f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 11:59:17 np0005580781 systemd[1]: libpod-conmon-f34971f6cb111a07b759547b88fd2a32f5bd7389c3c3f9bbb5dfd8adc9c8cb9a.scope: Deactivated successfully.
Jan 10 11:59:17 np0005580781 systemd[1]: Reloading.
Jan 10 11:59:17 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:59:17 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:59:17 np0005580781 systemd[1]: Reloading.
Jan 10 11:59:17 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 11:59:17 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 11:59:17 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:18 np0005580781 systemd[1]: Starting Ceph mds.cephfs.compute-0.anmivh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4...
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: Deploying daemon mds.cephfs.compute-0.anmivh on compute-0
Jan 10 11:59:18 np0005580781 podman[93848]: 2026-01-10 16:59:18.266303865 +0000 UTC m=+0.040861951 container create 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8da2cd4a14b5de8cbe7dda52435dee72ae75d021602c1e094bd5c65b1f07f7d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.anmivh supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:18 np0005580781 podman[93848]: 2026-01-10 16:59:18.323164537 +0000 UTC m=+0.097722643 container init 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:18 np0005580781 podman[93848]: 2026-01-10 16:59:18.333258929 +0000 UTC m=+0.107817025 container start 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 11:59:18 np0005580781 bash[93848]: 9a7a6ac388746ecb20aae5585e61fd4457540360401a8c6768e113c523c746c5
Jan 10 11:59:18 np0005580781 podman[93848]: 2026-01-10 16:59:18.249006745 +0000 UTC m=+0.023564851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:18 np0005580781 systemd[1]: Started Ceph mds.cephfs.compute-0.anmivh for a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4.
Jan 10 11:59:18 np0005580781 ceph-mds[93917]: set uid:gid to 167:167 (ceph:ceph)
Jan 10 11:59:18 np0005580781 ceph-mds[93917]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 10 11:59:18 np0005580781 ceph-mds[93917]: main not setting numa affinity
Jan 10 11:59:18 np0005580781 ceph-mds[93917]: pidfile_write: ignore empty --pid-file
Jan 10 11:59:18 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh[93891]: starting mds.cephfs.compute-0.anmivh at 
Jan 10 11:59:18 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 2 from mon.0
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:18 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev 5176c768-cabd-4d63-b825-44d378ef605b (Updating mds.cephfs deployment (+1 -> 1))
Jan 10 11:59:18 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event 5176c768-cabd-4d63-b825-44d378ef605b (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 10 11:59:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:18 np0005580781 ansible-async_wrapper.py[94058]: Invoked with j957312711211 30 /home/zuul/.ansible/tmp/ansible-tmp-1768064358.2146611-36727-219939285804744/AnsiballZ_command.py _
Jan 10 11:59:18 np0005580781 ansible-async_wrapper.py[94088]: Starting module and watcher
Jan 10 11:59:18 np0005580781 ansible-async_wrapper.py[94088]: Start watching 94089 (30)
Jan 10 11:59:18 np0005580781 ansible-async_wrapper.py[94089]: Start module (94089)
Jan 10 11:59:18 np0005580781 ansible-async_wrapper.py[94058]: Return async_wrapper task started.
Jan 10 11:59:18 np0005580781 python3[94090]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:18 np0005580781 podman[94093]: 2026-01-10 16:59:18.942573641 +0000 UTC m=+0.053327792 container create 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 11:59:18 np0005580781 systemd[1]: Started libpod-conmon-1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8.scope.
Jan 10 11:59:19 np0005580781 podman[94093]: 2026-01-10 16:59:18.916990712 +0000 UTC m=+0.027744873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:19 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:19 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a167183a7f09e4a0a3353ffe0448c0530c96119ff047fffffddcb7c9f99f22e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:19 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a167183a7f09e4a0a3353ffe0448c0530c96119ff047fffffddcb7c9f99f22e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:19 np0005580781 podman[94093]: 2026-01-10 16:59:19.04328554 +0000 UTC m=+0.154039671 container init 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:19 np0005580781 podman[94093]: 2026-01-10 16:59:19.052793565 +0000 UTC m=+0.163547706 container start 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 11:59:19 np0005580781 podman[94093]: 2026-01-10 16:59:19.057223683 +0000 UTC m=+0.167977814 container attach 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:19 np0005580781 podman[94154]: 2026-01-10 16:59:19.158189739 +0000 UTC m=+0.069264132 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 new map
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-10T16:59:19:194454+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-10T16:59:09.517425+0000#012modified#0112026-01-10T16:59:09.517425+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.anmivh{-1:14242} state up:standby seq 1 addr [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] compat {c=[1],r=[1],i=[1fff]}]
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 3 from mon.0
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Monitors have assigned me to become a standby
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] up:boot
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] as mds.0
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.anmivh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.anmivh"} v 0)
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.anmivh"} : dispatch
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e3 all = 0
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e4 new map
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-10T16:59:19:214126+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-10T16:59:09.517425+0000#012modified#0112026-01-10T16:59:19.214117+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14242}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.anmivh{0:14242} state up:creating seq 1 addr [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.anmivh=up:creating}
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 4 from mon.0
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x1
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x100
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x600
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x601
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x602
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x603
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x604
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x605
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x606
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x607
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x608
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.cache creating system inode with ino:0x609
Jan 10 11:59:19 np0005580781 ceph-mgr[75538]: [progress INFO root] Writing back 4 completed events
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:19 np0005580781 ceph-mds[93917]: mds.0.4 creating_done
Jan 10 11:59:19 np0005580781 ceph-mon[75249]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.anmivh is now active in filesystem cephfs as rank 0
Jan 10 11:59:19 np0005580781 podman[94154]: 2026-01-10 16:59:19.328183379 +0000 UTC m=+0.239257782 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:19 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 11:59:19 np0005580781 thirsty_ellis[94131]: 
Jan 10 11:59:19 np0005580781 thirsty_ellis[94131]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 11:59:19 np0005580781 systemd[1]: libpod-1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8.scope: Deactivated successfully.
Jan 10 11:59:19 np0005580781 podman[94093]: 2026-01-10 16:59:19.515097348 +0000 UTC m=+0.625851459 container died 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:19 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3a167183a7f09e4a0a3353ffe0448c0530c96119ff047fffffddcb7c9f99f22e-merged.mount: Deactivated successfully.
Jan 10 11:59:19 np0005580781 podman[94093]: 2026-01-10 16:59:19.556461453 +0000 UTC m=+0.667215564 container remove 1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8 (image=quay.io/ceph/ceph:v20, name=thirsty_ellis, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:59:19 np0005580781 systemd[1]: libpod-conmon-1191ef88f10db52b89a4f86e3f6256c4221cd4203071e1ca49c24c1b860e97d8.scope: Deactivated successfully.
Jan 10 11:59:19 np0005580781 ansible-async_wrapper.py[94089]: Module complete (94089)
Jan 10 11:59:19 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:20 np0005580781 python3[94385]: ansible-ansible.legacy.async_status Invoked with jid=j957312711211.94058 mode=status _async_dir=/root/.ansible_async
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: daemon mds.cephfs.compute-0.anmivh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: Cluster is now healthy
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: daemon mds.cephfs.compute-0.anmivh is now active in filesystem cephfs as rank 0
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e5 new map
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).mds e5 print_map
e5
btime 2026-01-10T16:59:20:234282+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-10T16:59:09.517425+0000
modified	2026-01-10T16:59:20.234278+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=14242}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 14242 members: 14242
[mds.cephfs.compute-0.anmivh{0:14242} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] compat {c=[1],r=[1],i=[1fff]}]
Jan 10 11:59:20 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh Updating MDS map to version 5 from mon.0
Jan 10 11:59:20 np0005580781 ceph-mds[93917]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 10 11:59:20 np0005580781 ceph-mds[93917]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 10 11:59:20 np0005580781 ceph-mds[93917]: mds.0.4 recovery_done -- successful recovery!
Jan 10 11:59:20 np0005580781 ceph-mds[93917]: mds.0.4 active_start
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3831969488,v1:192.168.122.100:6815/3831969488] up:active
Jan 10 11:59:20 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.anmivh=up:active}
Jan 10 11:59:20 np0005580781 python3[94489]: ansible-ansible.legacy.async_status Invoked with jid=j957312711211.94058 mode=cleanup _async_dir=/root/.ansible_async
Jan 10 11:59:20 np0005580781 podman[94527]: 2026-01-10 16:59:20.646968995 +0000 UTC m=+0.062403324 container create 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:20 np0005580781 systemd[1]: Started libpod-conmon-7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d.scope.
Jan 10 11:59:20 np0005580781 podman[94527]: 2026-01-10 16:59:20.615255049 +0000 UTC m=+0.030689448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:20 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:20 np0005580781 podman[94527]: 2026-01-10 16:59:20.753475672 +0000 UTC m=+0.168909981 container init 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:20 np0005580781 podman[94527]: 2026-01-10 16:59:20.761549995 +0000 UTC m=+0.176984294 container start 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:20 np0005580781 podman[94527]: 2026-01-10 16:59:20.766299932 +0000 UTC m=+0.181734241 container attach 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:20 np0005580781 brave_hawking[94543]: 167 167
Jan 10 11:59:20 np0005580781 systemd[1]: libpod-7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d.scope: Deactivated successfully.
Jan 10 11:59:20 np0005580781 podman[94527]: 2026-01-10 16:59:20.768862596 +0000 UTC m=+0.184296915 container died 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:20 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b52c936a24d24a24ee7f37b92d605a12f143502a7889767db9151ee09343c342-merged.mount: Deactivated successfully.
Jan 10 11:59:20 np0005580781 podman[94527]: 2026-01-10 16:59:20.810953542 +0000 UTC m=+0.226387841 container remove 7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hawking, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 11:59:20 np0005580781 systemd[1]: libpod-conmon-7875bb46053375f919382a6973a44f498206ea5afd3043bdbaf09b0ecf346d6d.scope: Deactivated successfully.
Jan 10 11:59:21 np0005580781 podman[94592]: 2026-01-10 16:59:21.023460951 +0000 UTC m=+0.058682686 container create e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 11:59:21 np0005580781 python3[94586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:21 np0005580781 systemd[1]: Started libpod-conmon-e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96.scope.
Jan 10 11:59:21 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:21 np0005580781 podman[94592]: 2026-01-10 16:59:20.998263543 +0000 UTC m=+0.033485308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:21 np0005580781 podman[94607]: 2026-01-10 16:59:21.099459516 +0000 UTC m=+0.047951076 container create 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 11:59:21 np0005580781 podman[94592]: 2026-01-10 16:59:21.104932675 +0000 UTC m=+0.140154450 container init e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 11:59:21 np0005580781 podman[94592]: 2026-01-10 16:59:21.115660514 +0000 UTC m=+0.150882239 container start e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:21 np0005580781 podman[94592]: 2026-01-10 16:59:21.119430583 +0000 UTC m=+0.154652318 container attach e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 11:59:21 np0005580781 systemd[1]: Started libpod-conmon-41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c.scope.
Jan 10 11:59:21 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:21 np0005580781 podman[94607]: 2026-01-10 16:59:21.075664649 +0000 UTC m=+0.024156219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721592a556aee1a06d2d603784a06a5bf911f031b779d7185e2d9c8d990d7e89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721592a556aee1a06d2d603784a06a5bf911f031b779d7185e2d9c8d990d7e89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:21 np0005580781 podman[94607]: 2026-01-10 16:59:21.192795123 +0000 UTC m=+0.141286723 container init 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:21 np0005580781 podman[94607]: 2026-01-10 16:59:21.200105424 +0000 UTC m=+0.148596974 container start 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:21 np0005580781 podman[94607]: 2026-01-10 16:59:21.205241122 +0000 UTC m=+0.153732722 container attach 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:21 np0005580781 epic_bhaskara[94620]: --> passed data devices: 0 physical, 3 LVM
Jan 10 11:59:21 np0005580781 epic_bhaskara[94620]: --> All data devices are unavailable
Jan 10 11:59:21 np0005580781 systemd[1]: libpod-e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96.scope: Deactivated successfully.
Jan 10 11:59:21 np0005580781 podman[94592]: 2026-01-10 16:59:21.690663455 +0000 UTC m=+0.725885180 container died e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:59:21 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 11:59:21 np0005580781 frosty_heisenberg[94630]: 
Jan 10 11:59:21 np0005580781 frosty_heisenberg[94630]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 10 11:59:21 np0005580781 systemd[1]: libpod-41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c.scope: Deactivated successfully.
Jan 10 11:59:21 np0005580781 systemd[1]: var-lib-containers-storage-overlay-98cdb031cd40e01658fbe8c4f3bfcefb3086c8e548cf1673540708b14c0630bd-merged.mount: Deactivated successfully.
Jan 10 11:59:21 np0005580781 podman[94592]: 2026-01-10 16:59:21.746956221 +0000 UTC m=+0.782177956 container remove e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:21 np0005580781 podman[94677]: 2026-01-10 16:59:21.758157104 +0000 UTC m=+0.029033719 container died 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:59:21 np0005580781 systemd[1]: libpod-conmon-e4bcf02da405e0bf206c3780b2522f516c20d0d917b32a7e6c1c3fd610151f96.scope: Deactivated successfully.
Jan 10 11:59:21 np0005580781 systemd[1]: var-lib-containers-storage-overlay-721592a556aee1a06d2d603784a06a5bf911f031b779d7185e2d9c8d990d7e89-merged.mount: Deactivated successfully.
Jan 10 11:59:21 np0005580781 podman[94677]: 2026-01-10 16:59:21.800958121 +0000 UTC m=+0.071834716 container remove 41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c (image=quay.io/ceph/ceph:v20, name=frosty_heisenberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:21 np0005580781 systemd[1]: libpod-conmon-41989e40d694b14e4da007a24f5b632d0309e3c74be7188da5ddb5eff5270e1c.scope: Deactivated successfully.
Jan 10 11:59:21 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:22 np0005580781 podman[94758]: 2026-01-10 16:59:22.277120146 +0000 UTC m=+0.063081733 container create 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:22 np0005580781 systemd[1]: Started libpod-conmon-18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb.scope.
Jan 10 11:59:22 np0005580781 podman[94758]: 2026-01-10 16:59:22.248789038 +0000 UTC m=+0.034750725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:22 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:22 np0005580781 podman[94758]: 2026-01-10 16:59:22.371200544 +0000 UTC m=+0.157162221 container init 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 11:59:22 np0005580781 podman[94758]: 2026-01-10 16:59:22.381867582 +0000 UTC m=+0.167829169 container start 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 11:59:22 np0005580781 podman[94758]: 2026-01-10 16:59:22.386010201 +0000 UTC m=+0.171971878 container attach 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:22 np0005580781 inspiring_neumann[94775]: 167 167
Jan 10 11:59:22 np0005580781 systemd[1]: libpod-18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb.scope: Deactivated successfully.
Jan 10 11:59:22 np0005580781 podman[94780]: 2026-01-10 16:59:22.438462167 +0000 UTC m=+0.032501360 container died 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 11:59:22 np0005580781 systemd[1]: var-lib-containers-storage-overlay-018334cfd5195fb2fd77e890c5b605c6b4b4c3a8137aeaf85550501667f144e2-merged.mount: Deactivated successfully.
Jan 10 11:59:22 np0005580781 podman[94780]: 2026-01-10 16:59:22.480417419 +0000 UTC m=+0.074456572 container remove 18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:22 np0005580781 systemd[1]: libpod-conmon-18b22629bafd8cb612114cabbf9c4f211f717f21e12157c04240c7a84f683fbb.scope: Deactivated successfully.
Jan 10 11:59:22 np0005580781 python3[94821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:22 np0005580781 podman[94827]: 2026-01-10 16:59:22.804994268 +0000 UTC m=+0.186874542 container create 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 11:59:22 np0005580781 podman[94835]: 2026-01-10 16:59:22.826505678 +0000 UTC m=+0.177626774 container create 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:22 np0005580781 systemd[1]: Started libpod-conmon-302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f.scope.
Jan 10 11:59:22 np0005580781 systemd[1]: Started libpod-conmon-5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe.scope.
Jan 10 11:59:22 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:22 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe69c2ae7d3af483529daa0109d454574cb0b82438f446906e992fa6178f4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:22 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe69c2ae7d3af483529daa0109d454574cb0b82438f446906e992fa6178f4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:22 np0005580781 podman[94827]: 2026-01-10 16:59:22.786607758 +0000 UTC m=+0.168488062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:22 np0005580781 podman[94827]: 2026-01-10 16:59:22.886443787 +0000 UTC m=+0.268324061 container init 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 11:59:22 np0005580781 podman[94835]: 2026-01-10 16:59:22.891156133 +0000 UTC m=+0.242277259 container init 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 11:59:22 np0005580781 podman[94827]: 2026-01-10 16:59:22.896040414 +0000 UTC m=+0.277920698 container start 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:22 np0005580781 podman[94835]: 2026-01-10 16:59:22.800969972 +0000 UTC m=+0.152091088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:22 np0005580781 podman[94835]: 2026-01-10 16:59:22.901944664 +0000 UTC m=+0.253065760 container start 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:22 np0005580781 podman[94827]: 2026-01-10 16:59:22.906026992 +0000 UTC m=+0.287907296 container attach 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:22 np0005580781 podman[94835]: 2026-01-10 16:59:22.915095494 +0000 UTC m=+0.266216600 container attach 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]: {
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:    "0": [
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:        {
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "devices": [
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "/dev/loop3"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            ],
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_name": "ceph_lv0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_size": "21470642176",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "name": "ceph_lv0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "tags": {
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.crush_device_class": "",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.encrypted": "0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osd_id": "0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.type": "block",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.vdo": "0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.with_tpm": "0"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            },
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "type": "block",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "vg_name": "ceph_vg0"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:        }
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:    ],
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:    "1": [
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:        {
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "devices": [
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "/dev/loop4"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            ],
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_name": "ceph_lv1",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_size": "21470642176",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "name": "ceph_lv1",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "tags": {
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.crush_device_class": "",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.encrypted": "0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osd_id": "1",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.type": "block",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.vdo": "0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.with_tpm": "0"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            },
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "type": "block",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "vg_name": "ceph_vg1"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:        }
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:    ],
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:    "2": [
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:        {
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "devices": [
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "/dev/loop5"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            ],
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_name": "ceph_lv2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_size": "21470642176",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "name": "ceph_lv2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "tags": {
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.crush_device_class": "",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.encrypted": "0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osd_id": "2",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.type": "block",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.vdo": "0",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:                "ceph.with_tpm": "0"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            },
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "type": "block",
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:            "vg_name": "ceph_vg2"
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:        }
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]:    ]
Jan 10 11:59:23 np0005580781 frosty_varahamihira[94861]: }
Jan 10 11:59:23 np0005580781 systemd[1]: libpod-302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f.scope: Deactivated successfully.
Jan 10 11:59:23 np0005580781 podman[94827]: 2026-01-10 16:59:23.181817459 +0000 UTC m=+0.563697763 container died 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 10 11:59:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:23 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3df806715b98d04a293a870ee9210dd29bef706064e8075429afa70f96ed6d66-merged.mount: Deactivated successfully.
Jan 10 11:59:23 np0005580781 podman[94827]: 2026-01-10 16:59:23.363394524 +0000 UTC m=+0.745274798 container remove 302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:23 np0005580781 systemd[1]: libpod-conmon-302f807c9405cb64e262701f3c9d4cf5514e2a2a669f156651e8995c811c309f.scope: Deactivated successfully.
Jan 10 11:59:23 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 11:59:23 np0005580781 admiring_elion[94863]: 
Jan 10 11:59:23 np0005580781 admiring_elion[94863]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Jan 10 11:59:23 np0005580781 systemd[1]: libpod-5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe.scope: Deactivated successfully.
Jan 10 11:59:23 np0005580781 podman[94835]: 2026-01-10 16:59:23.502140172 +0000 UTC m=+0.853261278 container died 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:23 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a4fe69c2ae7d3af483529daa0109d454574cb0b82438f446906e992fa6178f4f-merged.mount: Deactivated successfully.
Jan 10 11:59:23 np0005580781 podman[94835]: 2026-01-10 16:59:23.544819856 +0000 UTC m=+0.895940952 container remove 5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe (image=quay.io/ceph/ceph:v20, name=admiring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:23 np0005580781 systemd[1]: libpod-conmon-5b4444403ef53a64315d735df88c66e3c4ba966b490b71417f7d28940424c9fe.scope: Deactivated successfully.
Jan 10 11:59:23 np0005580781 ansible-async_wrapper.py[94088]: Done in kid B.
Jan 10 11:59:23 np0005580781 podman[94981]: 2026-01-10 16:59:23.832099795 +0000 UTC m=+0.051285743 container create 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:23 np0005580781 systemd[1]: Started libpod-conmon-7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657.scope.
Jan 10 11:59:23 np0005580781 podman[94981]: 2026-01-10 16:59:23.80458822 +0000 UTC m=+0.023774158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:23 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:23 np0005580781 podman[94981]: 2026-01-10 16:59:23.937091307 +0000 UTC m=+0.156277295 container init 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 11:59:23 np0005580781 podman[94981]: 2026-01-10 16:59:23.946500709 +0000 UTC m=+0.165686657 container start 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:23 np0005580781 podman[94981]: 2026-01-10 16:59:23.951331328 +0000 UTC m=+0.170517246 container attach 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:23 np0005580781 pedantic_dijkstra[94997]: 167 167
Jan 10 11:59:23 np0005580781 systemd[1]: libpod-7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657.scope: Deactivated successfully.
Jan 10 11:59:23 np0005580781 podman[94981]: 2026-01-10 16:59:23.95691482 +0000 UTC m=+0.176100728 container died 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 11:59:23 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 11:59:23 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0bed1db7b85958fd102ac1e2e6089eecf49a4985f0b153bac1ffbf59c499b8f8-merged.mount: Deactivated successfully.
Jan 10 11:59:24 np0005580781 podman[94981]: 2026-01-10 16:59:24.012490655 +0000 UTC m=+0.231676573 container remove 7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 11:59:24 np0005580781 systemd[1]: libpod-conmon-7e7ffc6e0db245c8f90dc0a4d8c43357a9927e225d3ead9698ff4c75baa8c657.scope: Deactivated successfully.
Jan 10 11:59:24 np0005580781 podman[95021]: 2026-01-10 16:59:24.224279473 +0000 UTC m=+0.100030751 container create ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:24 np0005580781 ceph-mds[93917]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 10 11:59:24 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mds-cephfs-compute-0-anmivh[93891]: 2026-01-10T16:59:24.228+0000 7f65855d3640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 10 11:59:24 np0005580781 podman[95021]: 2026-01-10 16:59:24.147104934 +0000 UTC m=+0.022856242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:24 np0005580781 systemd[1]: Started libpod-conmon-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope.
Jan 10 11:59:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:24 np0005580781 podman[95021]: 2026-01-10 16:59:24.315328333 +0000 UTC m=+0.191079731 container init ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:24 np0005580781 podman[95021]: 2026-01-10 16:59:24.324947481 +0000 UTC m=+0.200698819 container start ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 11:59:24 np0005580781 podman[95021]: 2026-01-10 16:59:24.329142532 +0000 UTC m=+0.204893860 container attach ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:24 np0005580781 python3[95067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:24 np0005580781 podman[95068]: 2026-01-10 16:59:24.561269648 +0000 UTC m=+0.050325745 container create 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:24 np0005580781 systemd[1]: Started libpod-conmon-1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae.scope.
Jan 10 11:59:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:24 np0005580781 podman[95068]: 2026-01-10 16:59:24.541013763 +0000 UTC m=+0.030069890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d90ba33becaf393201656eb3f385bb3988123d1e784180e7fc434c9f1a53bf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d90ba33becaf393201656eb3f385bb3988123d1e784180e7fc434c9f1a53bf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:24 np0005580781 podman[95068]: 2026-01-10 16:59:24.650628999 +0000 UTC m=+0.139685116 container init 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:24 np0005580781 podman[95068]: 2026-01-10 16:59:24.657276972 +0000 UTC m=+0.146333069 container start 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 11:59:24 np0005580781 podman[95068]: 2026-01-10 16:59:24.684154708 +0000 UTC m=+0.173211065 container attach 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 11:59:25 np0005580781 lvm[95177]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:59:25 np0005580781 lvm[95177]: VG ceph_vg0 finished
Jan 10 11:59:25 np0005580781 lvm[95179]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:59:25 np0005580781 lvm[95179]: VG ceph_vg1 finished
Jan 10 11:59:25 np0005580781 lvm[95181]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:59:25 np0005580781 lvm[95181]: VG ceph_vg2 finished
Jan 10 11:59:25 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 10 11:59:25 np0005580781 vigorous_dijkstra[95092]: 
Jan 10 11:59:25 np0005580781 vigorous_dijkstra[95092]: [{"container_id": "2d8e6ffc82d6", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.23%", "created": "2026-01-10T16:58:04.726457Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-10T16:58:04.801858Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139779Z", "memory_usage": 7790919, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-10T16:58:04.605554Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@crash.compute-0", "version": "20.2.0"}, {"container_id": "9a7a6ac38874", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.25%", "created": "2026-01-10T16:59:18.346552Z", "daemon_id": "cephfs.compute-0.anmivh", "daemon_name": "mds.cephfs.compute-0.anmivh", "daemon_type": "mds", "events": ["2026-01-10T16:59:18.421424Z daemon:mds.cephfs.compute-0.anmivh [INFO] \"Deployed mds.cephfs.compute-0.anmivh on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-01-10T16:59:20.140198Z", "memory_usage": 15330181, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-10T16:59:18.254836Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mds.cephfs.compute-0.anmivh", "version": "20.2.0"}, {"container_id": "1966a4894cf3", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "18.61%", "created": "2026-01-10T16:57:20.132238Z", "daemon_id": "compute-0.mkxlpr", "daemon_name": "mgr.compute-0.mkxlpr", "daemon_type": "mgr", "events": ["2026-01-10T16:58:10.864841Z daemon:mgr.compute-0.mkxlpr [INFO] \"Reconfigured mgr.compute-0.mkxlpr on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139647Z", "memory_usage": 547042099, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-10T16:57:20.018328Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mgr.compute-0.mkxlpr", "version": "20.2.0"}, {"container_id": "69622407e4b3", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "3.11%", "created": "2026-01-10T16:57:15.698113Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-10T16:58:10.158449Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139390Z", "memory_request": 2147483648, "memory_usage": 42498785, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-10T16:57:18.097137Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@mon.compute-0", "version": "20.2.0"}, {"container_id": "8bba0bcac67d", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.73%", "created": "2026-01-10T16:58:29.175131Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-10T16:58:29.274760Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139886Z", "memory_request": 4294967296, "memory_usage": 58174996, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-10T16:58:29.056076Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@osd.0", "version": "20.2.0"}, {"container_id": "2086bc4111bf", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "3.82%", "created": "2026-01-10T16:58:34.721291Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-10T16:58:34.848772Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.139989Z", "memory_request": 4294967296, "memory_usage": 58678312, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-10T16:58:34.481064Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@osd.1", "version": "20.2.0"}, {"container_id": "d71926618b51", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "4.06%", "created": "2026-01-10T16:58:42.340512Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-10T16:58:42.482816Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-10T16:59:20.140092Z", "memory_request": 4294967296, "memory_usage": 55710842, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-10T16:58:42.048876Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4@osd.2", "version": "20.2.0"}]
Jan 10 11:59:25 np0005580781 lvm[95183]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:59:25 np0005580781 lvm[95183]: VG ceph_vg1 finished
Jan 10 11:59:25 np0005580781 systemd[1]: libpod-1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae.scope: Deactivated successfully.
Jan 10 11:59:25 np0005580781 podman[95068]: 2026-01-10 16:59:25.155236047 +0000 UTC m=+0.644292144 container died 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 10 11:59:25 np0005580781 systemd[1]: var-lib-containers-storage-overlay-75d90ba33becaf393201656eb3f385bb3988123d1e784180e7fc434c9f1a53bf-merged.mount: Deactivated successfully.
Jan 10 11:59:25 np0005580781 podman[95068]: 2026-01-10 16:59:25.19933209 +0000 UTC m=+0.688388187 container remove 1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae (image=quay.io/ceph/ceph:v20, name=vigorous_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 11:59:25 np0005580781 hopeful_wescoff[95037]: {}
Jan 10 11:59:25 np0005580781 systemd[1]: libpod-conmon-1fb668f9345566844b7a358b39161b988fff95fcc5dd2d8c07f7fb6a282403ae.scope: Deactivated successfully.
Jan 10 11:59:25 np0005580781 systemd[1]: libpod-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope: Deactivated successfully.
Jan 10 11:59:25 np0005580781 systemd[1]: libpod-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope: Consumed 1.430s CPU time.
Jan 10 11:59:25 np0005580781 conmon[95037]: conmon ae2d4cb5369f91fb24cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope/container/memory.events
Jan 10 11:59:25 np0005580781 podman[95021]: 2026-01-10 16:59:25.252179497 +0000 UTC m=+1.127930825 container died ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 10 11:59:25 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4341065d2d006435ef2ccfeb568f9470d3d66c46feb841070805f96d9790b8fe-merged.mount: Deactivated successfully.
Jan 10 11:59:25 np0005580781 podman[95021]: 2026-01-10 16:59:25.2986675 +0000 UTC m=+1.174418788 container remove ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 11:59:25 np0005580781 systemd[1]: libpod-conmon-ae2d4cb5369f91fb24cd6c72f829c216a0977367840cb1738de74a12708c9492.scope: Deactivated successfully.
Jan 10 11:59:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:25 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 11:59:26 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:26 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:26 np0005580781 podman[95328]: 2026-01-10 16:59:26.13395929 +0000 UTC m=+0.087005715 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 11:59:26 np0005580781 podman[95328]: 2026-01-10 16:59:26.259136666 +0000 UTC m=+0.212183021 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 11:59:26 np0005580781 python3[95373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:26 np0005580781 podman[95393]: 2026-01-10 16:59:26.372626584 +0000 UTC m=+0.050958163 container create f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:26 np0005580781 systemd[1]: Started libpod-conmon-f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d.scope.
Jan 10 11:59:26 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33a62e7bd8717517d18281effd1cbbcb739b1934fd80f0e1eced6564b74da39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:26 np0005580781 podman[95393]: 2026-01-10 16:59:26.34339956 +0000 UTC m=+0.021731129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33a62e7bd8717517d18281effd1cbbcb739b1934fd80f0e1eced6564b74da39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:26 np0005580781 podman[95393]: 2026-01-10 16:59:26.463680963 +0000 UTC m=+0.142012592 container init f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:26 np0005580781 podman[95393]: 2026-01-10 16:59:26.476214345 +0000 UTC m=+0.154545934 container start f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 11:59:26 np0005580781 podman[95393]: 2026-01-10 16:59:26.480823928 +0000 UTC m=+0.159155517 container attach f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820989455' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 10 11:59:27 np0005580781 naughty_goldstine[95428]: 
Jan 10 11:59:27 np0005580781 naughty_goldstine[95428]: {"fsid":"a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":128,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1768064329,"num_in_osds":3,"osd_in_since":1768064301,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":24,"data_bytes":461710,"bytes_used":83939328,"bytes_avail":64327987200,"bytes_total":64411926528,"write_bytes_sec":1194,"read_op_per_sec":0,"write_op_per_sec":3},"fsmap":{"epoch":5,"btime":"2026-01-10T16:59:20:234282+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.anmivh","status":"up:active","gid":14242}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-10T16:58:41.970835+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 10 11:59:27 np0005580781 systemd[1]: libpod-f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d.scope: Deactivated successfully.
Jan 10 11:59:27 np0005580781 podman[95393]: 2026-01-10 16:59:27.020094207 +0000 UTC m=+0.698425756 container died f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 11:59:27 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e33a62e7bd8717517d18281effd1cbbcb739b1934fd80f0e1eced6564b74da39-merged.mount: Deactivated successfully.
Jan 10 11:59:27 np0005580781 podman[95393]: 2026-01-10 16:59:27.077456664 +0000 UTC m=+0.755788223 container remove f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d (image=quay.io/ceph/ceph:v20, name=naughty_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:27 np0005580781 systemd[1]: libpod-conmon-f0d0d88a7b3e4adf4108001c87c1d66bd987cc6d810446fb0af0bb3b3427d93d.scope: Deactivated successfully.
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 11:59:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 11:59:27 np0005580781 podman[95634]: 2026-01-10 16:59:27.648077167 +0000 UTC m=+0.053125445 container create d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 11:59:27 np0005580781 systemd[1]: Started libpod-conmon-d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632.scope.
Jan 10 11:59:27 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:27 np0005580781 podman[95634]: 2026-01-10 16:59:27.624848106 +0000 UTC m=+0.029896464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:27 np0005580781 podman[95634]: 2026-01-10 16:59:27.719068688 +0000 UTC m=+0.124117026 container init d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:27 np0005580781 podman[95634]: 2026-01-10 16:59:27.730055585 +0000 UTC m=+0.135103874 container start d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 11:59:27 np0005580781 agitated_kowalevski[95650]: 167 167
Jan 10 11:59:27 np0005580781 podman[95634]: 2026-01-10 16:59:27.735274226 +0000 UTC m=+0.140322514 container attach d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 11:59:27 np0005580781 systemd[1]: libpod-d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632.scope: Deactivated successfully.
Jan 10 11:59:27 np0005580781 podman[95634]: 2026-01-10 16:59:27.736121341 +0000 UTC m=+0.141169639 container died d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 11:59:27 np0005580781 systemd[1]: var-lib-containers-storage-overlay-11fd53482487e0862e6f2c94bb4b102b7c6a82a49faffde8fa3ea21212fd395a-merged.mount: Deactivated successfully.
Jan 10 11:59:27 np0005580781 podman[95634]: 2026-01-10 16:59:27.790867962 +0000 UTC m=+0.195916220 container remove d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 11:59:27 np0005580781 systemd[1]: libpod-conmon-d20bb39b4ce0e06fc87edcfb3bd743cb5a877ce5b8c6b7237acdbb129777d632.scope: Deactivated successfully.
Jan 10 11:59:27 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 11:59:28 np0005580781 podman[95699]: 2026-01-10 16:59:28.012777523 +0000 UTC m=+0.067265434 container create 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 11:59:28 np0005580781 python3[95693]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:28 np0005580781 systemd[1]: Started libpod-conmon-16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f.scope.
Jan 10 11:59:28 np0005580781 podman[95699]: 2026-01-10 16:59:27.980171931 +0000 UTC m=+0.034659912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:28 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:28 np0005580781 podman[95713]: 2026-01-10 16:59:28.114947764 +0000 UTC m=+0.055223256 container create 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 11:59:28 np0005580781 podman[95699]: 2026-01-10 16:59:28.1210411 +0000 UTC m=+0.175529011 container init 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:28 np0005580781 podman[95699]: 2026-01-10 16:59:28.133418298 +0000 UTC m=+0.187906199 container start 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 11:59:28 np0005580781 podman[95699]: 2026-01-10 16:59:28.137937958 +0000 UTC m=+0.192425839 container attach 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 11:59:28 np0005580781 systemd[1]: Started libpod-conmon-3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546.scope.
Jan 10 11:59:28 np0005580781 podman[95713]: 2026-01-10 16:59:28.089830809 +0000 UTC m=+0.030106381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:28 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c079d76d6c6c0fab25a745457e6541a962f731b8c4385676198d10f5224a872d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c079d76d6c6c0fab25a745457e6541a962f731b8c4385676198d10f5224a872d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:28 np0005580781 podman[95713]: 2026-01-10 16:59:28.205801869 +0000 UTC m=+0.146077381 container init 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:28 np0005580781 podman[95713]: 2026-01-10 16:59:28.212315807 +0000 UTC m=+0.152591339 container start 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 11:59:28 np0005580781 podman[95713]: 2026-01-10 16:59:28.216869408 +0000 UTC m=+0.157144920 container attach 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:28 np0005580781 epic_bassi[95721]: --> passed data devices: 0 physical, 3 LVM
Jan 10 11:59:28 np0005580781 epic_bassi[95721]: --> All data devices are unavailable
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 10 11:59:28 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80309119' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 10 11:59:28 np0005580781 festive_brahmagupta[95735]: 
Jan 10 11:59:28 np0005580781 systemd[1]: libpod-16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f.scope: Deactivated successfully.
Jan 10 11:59:28 np0005580781 podman[95699]: 2026-01-10 16:59:28.690423308 +0000 UTC m=+0.744911199 container died 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:28 np0005580781 systemd[1]: libpod-3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546.scope: Deactivated successfully.
Jan 10 11:59:28 np0005580781 festive_brahmagupta[95735]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""}]
Jan 10 11:59:28 np0005580781 podman[95713]: 2026-01-10 16:59:28.700834889 +0000 UTC m=+0.641110421 container died 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:28 np0005580781 systemd[1]: var-lib-containers-storage-overlay-53ea49a48a48f12a022b6e208ec8aeb94cd9c81886cce30e78c8edb60788067b-merged.mount: Deactivated successfully.
Jan 10 11:59:28 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c079d76d6c6c0fab25a745457e6541a962f731b8c4385676198d10f5224a872d-merged.mount: Deactivated successfully.
Jan 10 11:59:28 np0005580781 podman[95699]: 2026-01-10 16:59:28.761328926 +0000 UTC m=+0.815816807 container remove 16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_bassi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:28 np0005580781 podman[95713]: 2026-01-10 16:59:28.778744159 +0000 UTC m=+0.719019651 container remove 3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546 (image=quay.io/ceph/ceph:v20, name=festive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 11:59:28 np0005580781 systemd[1]: libpod-conmon-16579247793eddade24258d6dfc6c16032b052d11ed3968f90499d9802c75c9f.scope: Deactivated successfully.
Jan 10 11:59:28 np0005580781 systemd[1]: libpod-conmon-3b6c031b1731fbd218e87f384f437576934430966d2ba659ccc4dcac972a4546.scope: Deactivated successfully.
Jan 10 11:59:29 np0005580781 podman[95855]: 2026-01-10 16:59:29.246367188 +0000 UTC m=+0.044899788 container create dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 10 11:59:29 np0005580781 systemd[1]: Started libpod-conmon-dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced.scope.
Jan 10 11:59:29 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:29 np0005580781 podman[95855]: 2026-01-10 16:59:29.2249762 +0000 UTC m=+0.023508850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:29 np0005580781 podman[95855]: 2026-01-10 16:59:29.334339039 +0000 UTC m=+0.132871669 container init dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:29 np0005580781 podman[95855]: 2026-01-10 16:59:29.341663961 +0000 UTC m=+0.140196581 container start dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 11:59:29 np0005580781 friendly_beaver[95872]: 167 167
Jan 10 11:59:29 np0005580781 podman[95855]: 2026-01-10 16:59:29.345924884 +0000 UTC m=+0.144457484 container attach dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 11:59:29 np0005580781 systemd[1]: libpod-dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced.scope: Deactivated successfully.
Jan 10 11:59:29 np0005580781 podman[95855]: 2026-01-10 16:59:29.347576581 +0000 UTC m=+0.146109201 container died dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:29 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6735e97678c94ca9c929dda53a2bbac83854cac029df94a8b4ad6da2a649b411-merged.mount: Deactivated successfully.
Jan 10 11:59:29 np0005580781 podman[95855]: 2026-01-10 16:59:29.385811856 +0000 UTC m=+0.184344476 container remove dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 11:59:29 np0005580781 systemd[1]: libpod-conmon-dbec82f9937957a3c7a630c747491a935f191d86b2f9250db7fe69702dcf5ced.scope: Deactivated successfully.
Jan 10 11:59:29 np0005580781 podman[95896]: 2026-01-10 16:59:29.536718805 +0000 UTC m=+0.039448590 container create f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 11:59:29 np0005580781 systemd[1]: Started libpod-conmon-f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423.scope.
Jan 10 11:59:29 np0005580781 podman[95896]: 2026-01-10 16:59:29.519639842 +0000 UTC m=+0.022369647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:29 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:29 np0005580781 podman[95896]: 2026-01-10 16:59:29.654766315 +0000 UTC m=+0.157496140 container init f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 11:59:29 np0005580781 podman[95896]: 2026-01-10 16:59:29.666508935 +0000 UTC m=+0.169238730 container start f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 11:59:29 np0005580781 podman[95896]: 2026-01-10 16:59:29.670433988 +0000 UTC m=+0.173163873 container attach f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 11:59:29 np0005580781 python3[95943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:29 np0005580781 podman[95944]: 2026-01-10 16:59:29.903342526 +0000 UTC m=+0.045702021 container create b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 11:59:29 np0005580781 systemd[1]: Started libpod-conmon-b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6.scope.
Jan 10 11:59:29 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:29 np0005580781 podman[95944]: 2026-01-10 16:59:29.884457861 +0000 UTC m=+0.026817396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770913b075e53cf18751791e9d4485c9a4771ac471c1376782647ed25fad5620/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:29 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770913b075e53cf18751791e9d4485c9a4771ac471c1376782647ed25fad5620/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:29 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 11:59:29 np0005580781 podman[95944]: 2026-01-10 16:59:29.992890842 +0000 UTC m=+0.135250347 container init b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]: {
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:    "0": [
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:        {
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "devices": [
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "/dev/loop3"
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            ],
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "lv_name": "ceph_lv0",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "lv_size": "21470642176",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "name": "ceph_lv0",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:            "tags": {
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.crush_device_class": "",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.encrypted": "0",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 11:59:29 np0005580781 frosty_knuth[95913]:                "ceph.osd_id": "0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.type": "block",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.vdo": "0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.with_tpm": "0"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            },
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "type": "block",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "vg_name": "ceph_vg0"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:        }
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:    ],
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:    "1": [
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:        {
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "devices": [
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "/dev/loop4"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            ],
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_name": "ceph_lv1",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_size": "21470642176",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "name": "ceph_lv1",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "tags": {
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.crush_device_class": "",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.encrypted": "0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.osd_id": "1",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.type": "block",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.vdo": "0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.with_tpm": "0"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            },
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "type": "block",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "vg_name": "ceph_vg1"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:        }
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:    ],
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:    "2": [
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:        {
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "devices": [
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "/dev/loop5"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            ],
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_name": "ceph_lv2",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_size": "21470642176",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "name": "ceph_lv2",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "tags": {
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.cephx_lockbox_secret": "",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.cluster_name": "ceph",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.crush_device_class": "",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.encrypted": "0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.objectstore": "bluestore",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.osd_id": "2",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.type": "block",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.vdo": "0",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:                "ceph.with_tpm": "0"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            },
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "type": "block",
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:            "vg_name": "ceph_vg2"
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:        }
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]:    ]
Jan 10 11:59:30 np0005580781 frosty_knuth[95913]: }
Jan 10 11:59:30 np0005580781 podman[95944]: 2026-01-10 16:59:30.001855541 +0000 UTC m=+0.144215056 container start b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:30 np0005580781 podman[95944]: 2026-01-10 16:59:30.006090843 +0000 UTC m=+0.148450348 container attach b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:30 np0005580781 systemd[1]: libpod-f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423.scope: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[95967]: 2026-01-10 16:59:30.085684482 +0000 UTC m=+0.034083465 container died f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:30 np0005580781 systemd[1]: var-lib-containers-storage-overlay-bfd380280750ce8ca92eb5d3337bcf24a65c599b88d61d13c1d247cc304cdf7b-merged.mount: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[95967]: 2026-01-10 16:59:30.13127824 +0000 UTC m=+0.079677223 container remove f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 11:59:30 np0005580781 systemd[1]: libpod-conmon-f3a27df5ff5a588e2aa93cf9a1f48936848c79d50921a638d0d7d91361442423.scope: Deactivated successfully.
Jan 10 11:59:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 10 11:59:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007974817' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 10 11:59:30 np0005580781 vigorous_khayyam[95963]: mimic
Jan 10 11:59:30 np0005580781 systemd[1]: libpod-b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6.scope: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[95944]: 2026-01-10 16:59:30.488351474 +0000 UTC m=+0.630711059 container died b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 11:59:30 np0005580781 systemd[1]: var-lib-containers-storage-overlay-770913b075e53cf18751791e9d4485c9a4771ac471c1376782647ed25fad5620-merged.mount: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[95944]: 2026-01-10 16:59:30.54080075 +0000 UTC m=+0.683160245 container remove b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6 (image=quay.io/ceph/ceph:v20, name=vigorous_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 11:59:30 np0005580781 systemd[1]: libpod-conmon-b140f645b00e1049e3b58216adb77efdc923e995aef005c910ee31892bce93c6.scope: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[96072]: 2026-01-10 16:59:30.631373686 +0000 UTC m=+0.043306602 container create aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 11:59:30 np0005580781 systemd[1]: Started libpod-conmon-aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994.scope.
Jan 10 11:59:30 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:30 np0005580781 podman[96072]: 2026-01-10 16:59:30.684330986 +0000 UTC m=+0.096263912 container init aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 11:59:30 np0005580781 podman[96072]: 2026-01-10 16:59:30.689908117 +0000 UTC m=+0.101841023 container start aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 11:59:30 np0005580781 wonderful_lewin[96088]: 167 167
Jan 10 11:59:30 np0005580781 podman[96072]: 2026-01-10 16:59:30.693537022 +0000 UTC m=+0.105469928 container attach aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 11:59:30 np0005580781 systemd[1]: libpod-aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994.scope: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[96072]: 2026-01-10 16:59:30.694889861 +0000 UTC m=+0.106822767 container died aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 10 11:59:30 np0005580781 podman[96072]: 2026-01-10 16:59:30.615675383 +0000 UTC m=+0.027608289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:30 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b9468a2aa5caa98d3cb3b5bd452afc14516073fa2f2b4a23e0b47c0ef4d31a6a-merged.mount: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[96072]: 2026-01-10 16:59:30.72843101 +0000 UTC m=+0.140363916 container remove aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:30 np0005580781 systemd[1]: libpod-conmon-aee1520777e755ca8702ed4690fb42e7ebf12c4889c409dc0055f8e093d58994.scope: Deactivated successfully.
Jan 10 11:59:30 np0005580781 podman[96110]: 2026-01-10 16:59:30.877527987 +0000 UTC m=+0.037300779 container create 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 11:59:30 np0005580781 systemd[1]: Started libpod-conmon-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope.
Jan 10 11:59:30 np0005580781 podman[96110]: 2026-01-10 16:59:30.860555747 +0000 UTC m=+0.020328559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 11:59:30 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:30 np0005580781 podman[96110]: 2026-01-10 16:59:30.99326895 +0000 UTC m=+0.153041772 container init 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 11:59:30 np0005580781 podman[96110]: 2026-01-10 16:59:30.999677475 +0000 UTC m=+0.159450267 container start 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 11:59:31 np0005580781 podman[96110]: 2026-01-10 16:59:31.003506106 +0000 UTC m=+0.163278918 container attach 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 11:59:31 np0005580781 python3[96171]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:31 np0005580781 podman[96201]: 2026-01-10 16:59:31.596045243 +0000 UTC m=+0.057396389 container create 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 11:59:31 np0005580781 systemd[1]: Started libpod-conmon-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope.
Jan 10 11:59:31 np0005580781 systemd[1]: Started libcrun container.
Jan 10 11:59:31 np0005580781 podman[96201]: 2026-01-10 16:59:31.573150672 +0000 UTC m=+0.034501838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 10 11:59:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347dbaed3930284afcda3747f9ac6e5a360e07fefff5e64f30e557b050a6a53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6347dbaed3930284afcda3747f9ac6e5a360e07fefff5e64f30e557b050a6a53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 11:59:31 np0005580781 podman[96201]: 2026-01-10 16:59:31.693659943 +0000 UTC m=+0.155011119 container init 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:31 np0005580781 podman[96201]: 2026-01-10 16:59:31.702095947 +0000 UTC m=+0.163447093 container start 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:31 np0005580781 podman[96201]: 2026-01-10 16:59:31.705036361 +0000 UTC m=+0.166387507 container attach 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 11:59:31 np0005580781 lvm[96249]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 11:59:31 np0005580781 lvm[96249]: VG ceph_vg0 finished
Jan 10 11:59:31 np0005580781 lvm[96253]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 11:59:31 np0005580781 lvm[96253]: VG ceph_vg2 finished
Jan 10 11:59:31 np0005580781 lvm[96251]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 11:59:31 np0005580781 lvm[96251]: VG ceph_vg1 finished
Jan 10 11:59:31 np0005580781 nifty_lichterman[96127]: {}
Jan 10 11:59:31 np0005580781 systemd[1]: libpod-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope: Deactivated successfully.
Jan 10 11:59:31 np0005580781 systemd[1]: libpod-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope: Consumed 1.576s CPU time.
Jan 10 11:59:31 np0005580781 conmon[96127]: conmon 47d9331e2bfe0c8c4af0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope/container/memory.events
Jan 10 11:59:31 np0005580781 podman[96110]: 2026-01-10 16:59:31.930757932 +0000 UTC m=+1.090530734 container died 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:31 np0005580781 systemd[1]: var-lib-containers-storage-overlay-978ba196a9d9dcbe378f860e0c4b7fd933b237c06bbc55bd00f83be3982feb29-merged.mount: Deactivated successfully.
Jan 10 11:59:31 np0005580781 podman[96110]: 2026-01-10 16:59:31.976057701 +0000 UTC m=+1.135830493 container remove 47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 11:59:31 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 11:59:32 np0005580781 systemd[1]: libpod-conmon-47d9331e2bfe0c8c4af0901368f6de32c9e4ba4cae1400ef8c80d8d9984b88fb.scope: Deactivated successfully.
Jan 10 11:59:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 11:59:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 11:59:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 10 11:59:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481888757' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 10 11:59:32 np0005580781 boring_zhukovsky[96235]: 
Jan 10 11:59:32 np0005580781 systemd[1]: libpod-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope: Deactivated successfully.
Jan 10 11:59:32 np0005580781 conmon[96235]: conmon 79e03de1f85459d0c198 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope/container/memory.events
Jan 10 11:59:32 np0005580781 boring_zhukovsky[96235]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":6}}
Jan 10 11:59:32 np0005580781 podman[96201]: 2026-01-10 16:59:32.248205292 +0000 UTC m=+0.709556468 container died 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 11:59:32 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6347dbaed3930284afcda3747f9ac6e5a360e07fefff5e64f30e557b050a6a53-merged.mount: Deactivated successfully.
Jan 10 11:59:32 np0005580781 podman[96201]: 2026-01-10 16:59:32.306599439 +0000 UTC m=+0.767950615 container remove 79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e (image=quay.io/ceph/ceph:v20, name=boring_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 11:59:32 np0005580781 systemd[1]: libpod-conmon-79e03de1f85459d0c198df58bac614f476493d75622690265683281b667faa7e.scope: Deactivated successfully.
Jan 10 11:59:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:33 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 10 11:59:35 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:37 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_16:59:37
Jan 10 11:59:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 11:59:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 11:59:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Jan 10 11:59:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 11:59:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.028637363845935e-07 of space, bias 4.0, pg target 0.0009634364836615122 quantized to 16 (current 1)
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 10 11:59:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 10 11:59:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 11:59:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 10 11:59:39 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev a46ea606-41f4-4921-a552-5d2ac27c9fda (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:39 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 11:59:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:40 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=14.661141396s) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active pruub 71.834594727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 10 11:59:40 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev 3dd6f5ab-0a05-494e-96e3-019574b3283c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 10 11:59:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:40 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=14.661141396s) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown pruub 71.834594727s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 10 11:59:41 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev 610b1bc9-e7d4-41a4-a3a0-adc17524e440 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=18/19 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=17/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 10 11:59:41 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 10 11:59:41 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v89: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 11:59:41 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 10 11:59:42 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=12.631856918s) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 79.404685974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 10 11:59:42 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev b0b60259-8b16-4f62-958b-7d70bbc65ea2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 10 11:59:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 10 11:59:42 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=12.631856918s) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 79.404685974s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:42 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.402929306s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 87.063102722s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:42 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.402929306s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 87.063102722s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev 421bf9b0-54aa-4ee0-8769-b1782640febe (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [1] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 10 11:59:43 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:43 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v92: 100 pgs: 1 peering, 93 unknown, 6 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 11:59:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] update: starting ev d2f0cba0-11b1-40dc-8eb1-a074c7118ba1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev a46ea606-41f4-4921-a552-5d2ac27c9fda (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event a46ea606-41f4-4921-a552-5d2ac27c9fda (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev 3dd6f5ab-0a05-494e-96e3-019574b3283c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event 3dd6f5ab-0a05-494e-96e3-019574b3283c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev 610b1bc9-e7d4-41a4-a3a0-adc17524e440 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event 610b1bc9-e7d4-41a4-a3a0-adc17524e440 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev b0b60259-8b16-4f62-958b-7d70bbc65ea2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event b0b60259-8b16-4f62-958b-7d70bbc65ea2 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev 421bf9b0-54aa-4ee0-8769-b1782640febe (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event 421bf9b0-54aa-4ee0-8769-b1782640febe (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] complete: finished ev d2f0cba0-11b1-40dc-8eb1-a074c7118ba1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] Completed event d2f0cba0-11b1-40dc-8eb1-a074c7118ba1 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 10 11:59:44 np0005580781 ceph-mgr[75538]: [progress INFO root] Writing back 10 completed events
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 10 11:59:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:44 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 10 11:59:44 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 10 11:59:45 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 10 11:59:45 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 10 11:59:45 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:45 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 11:59:45 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 10 11:59:45 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 10 11:59:45 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v94: 146 pgs: 77 unknown, 69 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 10 11:59:45 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 10 11:59:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 10 11:59:46 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 75.884468079s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:46 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 75.884468079s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:46 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 10 11:59:46 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 10 11:59:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 10 11:59:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 10 11:59:47 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 10 11:59:47 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=12.622920990s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 33'38 active pruub 90.103996277s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=12.622920990s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 unknown pruub 90.103996277s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:47 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 10 11:59:47 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 10 11:59:47 np0005580781 systemd-logind[798]: New session 34 of user zuul.
Jan 10 11:59:47 np0005580781 systemd[1]: Started Session 34 of User zuul.
Jan 10 11:59:47 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 11:59:47 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 16 peering, 62 unknown, 99 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:48 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 10 11:59:48 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 10 11:59:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=13.497598648s) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active pruub 86.496665955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=13.497598648s) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown pruub 86.496665955s@ mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 10 11:59:48 np0005580781 python3.9[96479]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:48 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 10 11:59:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 10 11:59:49 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=40/43 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:49 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 16 peering, 31 unknown, 130 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:50 np0005580781 python3.9[96697]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 11:59:51 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 10 11:59:51 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 10 11:59:51 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 16 peering, 161 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:52 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 10 11:59:52 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 10 11:59:52 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 10 11:59:52 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 10 11:59:53 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 10 11:59:53 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 10 11:59:53 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 10 11:59:53 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 10 11:59:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:53 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 11:59:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.946036339s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196174622s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025462151s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275505066s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945967674s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196235657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025725365s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275756836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944757462s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195938110s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944560051s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196243286s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943988800s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195838928s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023788452s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275848389s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023344040s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276062012s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942822456s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195831299s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022882462s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276046753s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022719383s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276260376s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941881180s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195747375s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022457123s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276466370s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941703796s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196022034s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021822929s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276275635s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021719933s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276496887s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939940453s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194961548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021199226s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276519775s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939948082s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195663452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020483971s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276596069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938565254s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194801331s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937766075s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194831848s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020411491s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276550293s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937079430s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194824219s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936864853s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194839478s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018429756s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018333435s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277122498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936121941s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194946289s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017609596s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276664734s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934791565s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934700012s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017175674s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934269905s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193969727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017211914s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277099609s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930295944s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.190406799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.661448479s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491836548s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.661420822s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491836548s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967473984s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798049927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967448235s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798049927s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016772270s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277183533s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967326164s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798057556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.967307091s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798057556s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933568954s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194084167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.660369873s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491561890s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.660344124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491561890s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016240120s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277198792s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932877541s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193984985s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966688156s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798217773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966665268s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798217773s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015887260s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277290344s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933269501s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195274353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966203690s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798019409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.966114044s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798019409s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.659540176s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491722107s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.659503937s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491722107s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965217590s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798019409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.658833504s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491676331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.658818245s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491676331s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965171814s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798019409s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965347290s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798385620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.965298653s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798385620s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.964876175s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.798011780s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.964852333s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.798011780s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932654381s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194847107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955799103s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796836853s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650593758s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491889954s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650234222s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491912842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954932213s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796813965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649713516s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491882324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649621010s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492034912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649418831s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492172241s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648921013s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491943359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953218460s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796546936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648669243s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953178406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796897888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952614784s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796539307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648002625s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492103577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952114105s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796447754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956947327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440971375s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956913948s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440971375s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956763268s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.441024780s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956607819s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440879822s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956736565s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.441024780s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956587791s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440879822s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956534386s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440910339s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956438065s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440856934s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956491470s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440910339s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956419945s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440856934s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956384659s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440849304s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956365585s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440849304s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956201553s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440826416s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956261635s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440933228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956164360s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440826416s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.956238747s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440933228s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570457458s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055160522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.570424080s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955455780s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440795898s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955396652s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440795898s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955231667s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440811157s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955141068s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569484711s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055030823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955207825s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440811157s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569366455s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055030823s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955098152s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569361687s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.055160522s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.569333076s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.055160522s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568908691s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054847717s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568881989s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054847717s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954612732s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440727234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954586029s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440734863s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954518318s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440734863s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954583168s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440727234s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568431854s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054718018s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645599365s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492210388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.568410873s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954238892s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440666199s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949820518s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796524048s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954300880s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954220772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440666199s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645345688s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492187500s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954264641s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440750122s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567855835s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054450989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949461937s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567838669s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054450989s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953746796s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440574646s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645701408s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492774963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953726768s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440574646s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949236870s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948904037s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796211243s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953638077s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440696716s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953615189s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440696716s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644659996s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953246117s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440460205s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949141502s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796890259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953224182s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644832611s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492797852s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948488235s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796607971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644400597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492721558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947714806s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796150208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.567257881s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.054718018s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644159317s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492759705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947475433s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796226501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562947273s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 active pruub 95.050613403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.562922478s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.050613403s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952595711s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440460205s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44 pruub=10.566867828s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 95.054718018s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952573776s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440460205s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953011513s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 97.440956116s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952985764s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 97.440956116s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642189980s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492256165s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 11:59:54 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 10 11:59:55 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 11:59:55 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 42 peering, 135 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:56 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 10 11:59:56 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 10 11:59:56 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 10 11:59:56 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 10 11:59:57 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 10 11:59:57 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 10 11:59:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 42 peering, 135 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 11:59:58 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 10 11:59:58 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 10 11:59:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 11:59:58 np0005580781 systemd[1]: session-34.scope: Deactivated successfully.
Jan 10 11:59:58 np0005580781 systemd[1]: session-34.scope: Consumed 9.111s CPU time.
Jan 10 11:59:58 np0005580781 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Jan 10 11:59:58 np0005580781 systemd-logind[798]: Removed session 34.
Jan 10 11:59:59 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 10 11:59:59 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 10 11:59:59 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 10 11:59:59 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 10 12:00:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 12:00:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 10 12:00:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 10 12:00:00 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 10 12:00:00 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 10 12:00:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 10 12:00:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 10 12:00:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 10 12:00:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 10 12:00:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 10 12:00:00 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 10 12:00:00 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918597221s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055480957s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918471336s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055480957s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918116570s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055183411s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.918060303s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055183411s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913496971s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.050857544s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.913447380s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.050857544s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917839050s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 103.055328369s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:00 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 46 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46 pruub=11.917801857s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055328369s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:00 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:00 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:00 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:00 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:01 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 10 12:00:01 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 10 12:00:01 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 10 12:00:01 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 10 12:00:01 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 10 12:00:01 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 10 12:00:01 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 10 12:00:01 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 10 12:00:01 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:01 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:01 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:01 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 12:00:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 10 12:00:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 10 12:00:02 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 10 12:00:02 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 10 12:00:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 10 12:00:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 10 12:00:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 10 12:00:02 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926258087s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995513916s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925914764s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995262146s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925272942s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995002747s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924705505s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.994689941s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:02 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:02 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 10 12:00:02 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:02 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:02 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:02 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:03 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 10 12:00:03 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 10 12:00:03 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 10 12:00:03 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 10 12:00:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 10 12:00:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 10 12:00:03 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 10 12:00:03 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 10 12:00:03 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:03 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:03 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:03 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 49 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=0 lpr=48 pi=[44,48)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 10 12:00:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 10 12:00:04 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 10 12:00:04 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 10 12:00:04 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 10 12:00:04 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 10 12:00:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 10 12:00:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 10 12:00:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 10 12:00:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 10 12:00:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 10 12:00:04 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 10 12:00:04 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313828468s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 active pruub 103.055412292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:04 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.313767433s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.055412292s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:04 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312836647s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 active pruub 103.054672241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:04 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 50 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50 pruub=8.312801361s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 103.054672241s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 10 12:00:04 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:04 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:05 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 10 12:00:05 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 10 12:00:05 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 10 12:00:05 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 10 12:00:05 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 10 12:00:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 10 12:00:05 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:05 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 12:00:06 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 10 12:00:07 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 10 12:00:07 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 10 12:00:07 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 10 12:00:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 12:00:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:08 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 10 12:00:08 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 10 12:00:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:00:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:00:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:00:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:00:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:00:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:00:09 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 10 12:00:09 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 10 12:00:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 346 B/s, 1 keys/s, 2 objects/s recovering
Jan 10 12:00:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 10 12:00:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 10 12:00:10 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 10 12:00:10 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 10 12:00:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 10 12:00:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 10 12:00:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 10 12:00:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 10 12:00:10 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 10 12:00:11 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.995003700s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995697021s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:11 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:11 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994361877s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995353699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:11 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:11 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:11 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 10 12:00:11 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 10 12:00:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 301 B/s, 1 keys/s, 1 objects/s recovering
Jan 10 12:00:12 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 10 12:00:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 10 12:00:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 10 12:00:12 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 10 12:00:12 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 10 12:00:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 10 12:00:12 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 10 12:00:12 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:12 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 53 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=52/53 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:12 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 10 12:00:12 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 10 12:00:12 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 10 12:00:12 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 10 12:00:12 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 10 12:00:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 10 12:00:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 10 12:00:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 10 12:00:13 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 10 12:00:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 10 12:00:13 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 10 12:00:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 270 B/s, 0 objects/s recovering
Jan 10 12:00:14 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 10 12:00:14 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 10 12:00:14 np0005580781 systemd-logind[798]: New session 35 of user zuul.
Jan 10 12:00:14 np0005580781 systemd[1]: Started Session 35 of User zuul.
Jan 10 12:00:15 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 10 12:00:15 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 10 12:00:15 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 10 12:00:15 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 10 12:00:15 np0005580781 python3.9[96907]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 10 12:00:15 np0005580781 systemd[76625]: Starting Mark boot as successful...
Jan 10 12:00:15 np0005580781 systemd[76625]: Finished Mark boot as successful.
Jan 10 12:00:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 2 peering, 175 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:16 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 10 12:00:16 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 10 12:00:16 np0005580781 python3.9[97082]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:00:17 np0005580781 python3.9[97238]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:00:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 10 12:00:18 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 10 12:00:18 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 10 12:00:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:18 np0005580781 python3.9[97391]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:00:19 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 10 12:00:19 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 10 12:00:19 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 10 12:00:19 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 10 12:00:19 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 10 12:00:19 np0005580781 python3.9[97545]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:00:19 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 10 12:00:19 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 10 12:00:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Jan 10 12:00:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 10 12:00:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 10 12:00:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 10 12:00:20 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 10 12:00:20 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 10 12:00:20 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 10 12:00:20 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 10 12:00:20 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209659576s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 active pruub 119.055152893s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:20 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 56 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=8.209583282s) [2] r=-1 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.055152893s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:20 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:20 np0005580781 python3.9[97697]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:00:21 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 10 12:00:21 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 10 12:00:21 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 10 12:00:21 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 10 12:00:21 np0005580781 python3.9[97847]: ansible-ansible.builtin.service_facts Invoked
Jan 10 12:00:21 np0005580781 network[97864]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 12:00:21 np0005580781 network[97865]: 'network-scripts' will be removed from distribution in near future.
Jan 10 12:00:21 np0005580781 network[97866]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 12:00:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 10 12:00:21 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 10 12:00:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 10 12:00:21 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 10 12:00:21 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:22 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 10 12:00:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Jan 10 12:00:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 10 12:00:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 10 12:00:22 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 10 12:00:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 10 12:00:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 10 12:00:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 10 12:00:22 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 10 12:00:22 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 10 12:00:22 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692907333s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 active pruub 119.995796204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:22 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:22 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 58 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:23 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 10 12:00:23 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 10 12:00:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 10 12:00:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 10 12:00:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 10 12:00:23 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 10 12:00:23 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=0 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 10 12:00:24 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 10 12:00:24 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 10 12:00:24 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 10 12:00:24 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 10 12:00:24 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 10 12:00:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 10 12:00:24 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 10 12:00:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 10 12:00:24 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 10 12:00:24 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 10 12:00:24 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564128876s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 active pruub 118.069427490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:24 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:24 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:25 np0005580781 python3.9[98126]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:00:25 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 10 12:00:25 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 10 12:00:25 np0005580781 python3.9[98276]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:00:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 10 12:00:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 10 12:00:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 10 12:00:25 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 10 12:00:25 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=0 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 10 12:00:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 10 12:00:26 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 10 12:00:26 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 10 12:00:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 10 12:00:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 10 12:00:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 10 12:00:26 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 10 12:00:26 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 10 12:00:26 np0005580781 python3.9[98430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:00:27 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 10 12:00:27 np0005580781 python3.9[98588]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:00:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 10 12:00:28 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380357742s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 active pruub 133.736038208s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:28 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 62 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.380233765s) [1] r=-1 lpr=62 pi=[48,62)/1 crt=33'39 unknown NOTIFY pruub 133.736038208s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:28 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 10 12:00:28 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:28 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 10 12:00:28 np0005580781 python3.9[98672]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:00:29 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 10 12:00:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 12:00:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 10 12:00:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 10 12:00:30 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 10 12:00:30 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 10 12:00:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 10 12:00:30 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 10 12:00:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 10 12:00:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 10 12:00:30 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 10 12:00:31 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 10 12:00:31 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 10 12:00:31 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018323898s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 active pruub 134.575500488s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:31 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 64 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=13.018235207s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=33'39 unknown NOTIFY pruub 134.575500488s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:31 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 10 12:00:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 10 12:00:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 10 12:00:31 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 10 12:00:31 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 12:00:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 10 12:00:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 10 12:00:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 10 12:00:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 10 12:00:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 10 12:00:32 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 10 12:00:32 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 10 12:00:33 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 10 12:00:33 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:33 np0005580781 podman[98884]: 2026-01-10 17:00:33.78621491 +0000 UTC m=+0.045880311 container create 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 12:00:33 np0005580781 systemd[1]: Started libpod-conmon-7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3.scope.
Jan 10 12:00:33 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:00:33 np0005580781 podman[98884]: 2026-01-10 17:00:33.76751309 +0000 UTC m=+0.027178521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:00:33 np0005580781 podman[98884]: 2026-01-10 17:00:33.878124024 +0000 UTC m=+0.137789455 container init 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:00:33 np0005580781 podman[98884]: 2026-01-10 17:00:33.885911364 +0000 UTC m=+0.145576785 container start 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 12:00:33 np0005580781 podman[98884]: 2026-01-10 17:00:33.889745963 +0000 UTC m=+0.149411384 container attach 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:00:33 np0005580781 musing_franklin[98900]: 167 167
Jan 10 12:00:33 np0005580781 systemd[1]: libpod-7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3.scope: Deactivated successfully.
Jan 10 12:00:33 np0005580781 podman[98884]: 2026-01-10 17:00:33.89459624 +0000 UTC m=+0.154261661 container died 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 12:00:33 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4b9d4240e6d10a56ccea08051c60a74f670d48f4a819c1c2626c18c447482bec-merged.mount: Deactivated successfully.
Jan 10 12:00:33 np0005580781 podman[98884]: 2026-01-10 17:00:33.933220005 +0000 UTC m=+0.192885406 container remove 7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_franklin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:00:33 np0005580781 systemd[1]: libpod-conmon-7806112114f7577c65d32611f71a614d2d2695b201fd0bc784997e9c849f03f3.scope: Deactivated successfully.
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:00:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:00:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 10 12:00:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 10 12:00:34 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 10 12:00:34 np0005580781 podman[98924]: 2026-01-10 17:00:34.089202294 +0000 UTC m=+0.045675935 container create 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:00:34 np0005580781 systemd[1]: Started libpod-conmon-260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf.scope.
Jan 10 12:00:34 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:00:34 np0005580781 podman[98924]: 2026-01-10 17:00:34.071155353 +0000 UTC m=+0.027629034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:00:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:34 np0005580781 podman[98924]: 2026-01-10 17:00:34.181764507 +0000 UTC m=+0.138238148 container init 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 12:00:34 np0005580781 podman[98924]: 2026-01-10 17:00:34.190443482 +0000 UTC m=+0.146917143 container start 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 12:00:34 np0005580781 podman[98924]: 2026-01-10 17:00:34.194221699 +0000 UTC m=+0.150695390 container attach 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 12:00:34 np0005580781 condescending_grothendieck[98941]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:00:34 np0005580781 condescending_grothendieck[98941]: --> All data devices are unavailable
Jan 10 12:00:34 np0005580781 systemd[1]: libpod-260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf.scope: Deactivated successfully.
Jan 10 12:00:34 np0005580781 podman[98924]: 2026-01-10 17:00:34.698268099 +0000 UTC m=+0.654741750 container died 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 12:00:34 np0005580781 systemd[1]: var-lib-containers-storage-overlay-7d8be473c39483e49c2ba33b38deef63751ae3d47d0e066db40b3b48459268cc-merged.mount: Deactivated successfully.
Jan 10 12:00:34 np0005580781 podman[98924]: 2026-01-10 17:00:34.750916691 +0000 UTC m=+0.707390332 container remove 260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:00:34 np0005580781 systemd[1]: libpod-conmon-260702f257bf71109f54292c1b2b5b044f2deb70ef365050711224ec8da3d4cf.scope: Deactivated successfully.
Jan 10 12:00:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 10 12:00:34 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 10 12:00:34 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 10 12:00:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 10 12:00:34 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 10 12:00:35 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 10 12:00:35 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 10 12:00:35 np0005580781 podman[99043]: 2026-01-10 17:00:35.272159589 +0000 UTC m=+0.044785500 container create c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 12:00:35 np0005580781 systemd[1]: Started libpod-conmon-c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68.scope.
Jan 10 12:00:35 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 10 12:00:35 np0005580781 podman[99043]: 2026-01-10 17:00:35.255072985 +0000 UTC m=+0.027698866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:00:35 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:00:35 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 10 12:00:35 np0005580781 podman[99043]: 2026-01-10 17:00:35.387386543 +0000 UTC m=+0.160012424 container init c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 12:00:35 np0005580781 podman[99043]: 2026-01-10 17:00:35.393539828 +0000 UTC m=+0.166165729 container start c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:00:35 np0005580781 relaxed_nobel[99060]: 167 167
Jan 10 12:00:35 np0005580781 systemd[1]: libpod-c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68.scope: Deactivated successfully.
Jan 10 12:00:35 np0005580781 podman[99043]: 2026-01-10 17:00:35.399073175 +0000 UTC m=+0.171699046 container attach c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 12:00:35 np0005580781 podman[99043]: 2026-01-10 17:00:35.399509087 +0000 UTC m=+0.172134968 container died c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:00:35 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6b5bbd021456820ce5268344c82a626950efe0d699502a6ab4d061983e522d43-merged.mount: Deactivated successfully.
Jan 10 12:00:35 np0005580781 podman[99043]: 2026-01-10 17:00:35.444838521 +0000 UTC m=+0.217464382 container remove c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:00:35 np0005580781 systemd[1]: libpod-conmon-c20e266ab48459e7579c3079b5c434980abdbe446a214d80e3bdfed46aa8ac68.scope: Deactivated successfully.
Jan 10 12:00:35 np0005580781 podman[99084]: 2026-01-10 17:00:35.602740965 +0000 UTC m=+0.040759826 container create f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 12:00:35 np0005580781 systemd[1]: Started libpod-conmon-f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d.scope.
Jan 10 12:00:35 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:00:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:35 np0005580781 podman[99084]: 2026-01-10 17:00:35.584639252 +0000 UTC m=+0.022658133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:00:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:35 np0005580781 podman[99084]: 2026-01-10 17:00:35.69751302 +0000 UTC m=+0.135531911 container init f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 12:00:35 np0005580781 podman[99084]: 2026-01-10 17:00:35.712881855 +0000 UTC m=+0.150900716 container start f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:00:35 np0005580781 podman[99084]: 2026-01-10 17:00:35.717894737 +0000 UTC m=+0.155913598 container attach f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:00:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]: {
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:    "0": [
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:        {
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "devices": [
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "/dev/loop3"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            ],
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_name": "ceph_lv0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_size": "21470642176",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "name": "ceph_lv0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "tags": {
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cluster_name": "ceph",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.crush_device_class": "",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.encrypted": "0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.objectstore": "bluestore",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osd_id": "0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.type": "block",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.vdo": "0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.with_tpm": "0"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            },
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "type": "block",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "vg_name": "ceph_vg0"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:        }
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:    ],
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:    "1": [
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:        {
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "devices": [
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "/dev/loop4"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            ],
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_name": "ceph_lv1",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_size": "21470642176",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "name": "ceph_lv1",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "tags": {
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cluster_name": "ceph",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.crush_device_class": "",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.encrypted": "0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.objectstore": "bluestore",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osd_id": "1",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.type": "block",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.vdo": "0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.with_tpm": "0"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            },
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "type": "block",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "vg_name": "ceph_vg1"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:        }
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:    ],
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:    "2": [
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:        {
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "devices": [
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "/dev/loop5"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            ],
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_name": "ceph_lv2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_size": "21470642176",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "name": "ceph_lv2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "tags": {
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.cluster_name": "ceph",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.crush_device_class": "",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.encrypted": "0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.objectstore": "bluestore",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osd_id": "2",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.type": "block",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.vdo": "0",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:                "ceph.with_tpm": "0"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            },
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "type": "block",
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:            "vg_name": "ceph_vg2"
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:        }
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]:    ]
Jan 10 12:00:36 np0005580781 jovial_lichterman[99100]: }
Jan 10 12:00:36 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 10 12:00:36 np0005580781 systemd[1]: libpod-f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d.scope: Deactivated successfully.
Jan 10 12:00:36 np0005580781 podman[99084]: 2026-01-10 17:00:36.067492012 +0000 UTC m=+0.505510873 container died f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 12:00:36 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c6a149a7755436951a72da4edf0acfdb4e878994a38cc0e219c2d0896595767e-merged.mount: Deactivated successfully.
Jan 10 12:00:36 np0005580781 podman[99084]: 2026-01-10 17:00:36.139975506 +0000 UTC m=+0.577994367 container remove f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lichterman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 12:00:36 np0005580781 systemd[1]: libpod-conmon-f45824ee63c6d787265729f153515d164ed8d30eeec4f6ee280ed20e7d570a5d.scope: Deactivated successfully.
Jan 10 12:00:36 np0005580781 podman[99186]: 2026-01-10 17:00:36.568030204 +0000 UTC m=+0.020078110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:00:36 np0005580781 podman[99186]: 2026-01-10 17:00:36.80262132 +0000 UTC m=+0.254669226 container create 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:00:36 np0005580781 systemd[1]: Started libpod-conmon-1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504.scope.
Jan 10 12:00:36 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:00:37 np0005580781 podman[99186]: 2026-01-10 17:00:37.009071715 +0000 UTC m=+0.461119621 container init 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 12:00:37 np0005580781 podman[99186]: 2026-01-10 17:00:37.019855955 +0000 UTC m=+0.471903821 container start 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:00:37 np0005580781 dazzling_moser[99202]: 167 167
Jan 10 12:00:37 np0005580781 systemd[1]: libpod-1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504.scope: Deactivated successfully.
Jan 10 12:00:37 np0005580781 podman[99186]: 2026-01-10 17:00:37.038048782 +0000 UTC m=+0.490096688 container attach 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:00:37 np0005580781 podman[99186]: 2026-01-10 17:00:37.038538007 +0000 UTC m=+0.490585883 container died 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 12:00:37 np0005580781 systemd[1]: var-lib-containers-storage-overlay-7ede883d89862905a42416e9d84caacab7e0c3ef034ecb42e8c52678d3517fa7-merged.mount: Deactivated successfully.
Jan 10 12:00:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:00:37
Jan 10 12:00:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:00:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:00:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'volumes', 'backups', 'images', 'vms', 'cephfs.cephfs.data']
Jan 10 12:00:37 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Jan 10 12:00:38 np0005580781 podman[99186]: 2026-01-10 17:00:38.106040141 +0000 UTC m=+1.558088017 container remove 1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:00:38 np0005580781 systemd[1]: libpod-conmon-1a93aedad4283f4ffe575dd21da22e6f9e81782555977bc3f0c375028027d504.scope: Deactivated successfully.
Jan 10 12:00:38 np0005580781 podman[99228]: 2026-01-10 17:00:38.298398196 +0000 UTC m=+0.057213612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:00:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:00:38 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483779907s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 active pruub 141.736114502s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:00:38 np0005580781 ceph-osd[85764]: osd.0 pg_epoch: 67 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67 pruub=12.483655930s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=33'39 unknown NOTIFY pruub 141.736114502s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:00:38 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 10 12:00:38 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:00:38 np0005580781 podman[99228]: 2026-01-10 17:00:38.998803541 +0000 UTC m=+0.757618957 container create 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:00:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:00:39 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 10 12:00:39 np0005580781 systemd[1]: Started libpod-conmon-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope.
Jan 10 12:00:39 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:00:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:00:39 np0005580781 podman[99228]: 2026-01-10 17:00:39.104971229 +0000 UTC m=+0.863786645 container init 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:00:39 np0005580781 podman[99228]: 2026-01-10 17:00:39.112496779 +0000 UTC m=+0.871312185 container start 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 12:00:39 np0005580781 podman[99228]: 2026-01-10 17:00:39.116549653 +0000 UTC m=+0.875365059 container attach 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 12:00:39 np0005580781 lvm[99323]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:00:39 np0005580781 lvm[99323]: VG ceph_vg0 finished
Jan 10 12:00:39 np0005580781 lvm[99324]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:00:39 np0005580781 lvm[99324]: VG ceph_vg1 finished
Jan 10 12:00:39 np0005580781 lvm[99326]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:00:39 np0005580781 lvm[99326]: VG ceph_vg2 finished
Jan 10 12:00:39 np0005580781 musing_swartz[99245]: {}
Jan 10 12:00:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 10 12:00:39 np0005580781 systemd[1]: libpod-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope: Deactivated successfully.
Jan 10 12:00:39 np0005580781 systemd[1]: libpod-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope: Consumed 1.443s CPU time.
Jan 10 12:00:39 np0005580781 podman[99228]: 2026-01-10 17:00:39.993925863 +0000 UTC m=+1.752741299 container died 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:00:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 10 12:00:40 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 10 12:00:40 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:00:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 10 12:00:40 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ac38948ad77b9e1e7615ba4d3aae2ea1b11449688d26e6e60b3020132e4af505-merged.mount: Deactivated successfully.
Jan 10 12:00:40 np0005580781 podman[99228]: 2026-01-10 17:00:40.058987503 +0000 UTC m=+1.817802909 container remove 2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:00:40 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 10 12:00:40 np0005580781 systemd[1]: libpod-conmon-2ac3111036bb14a3e0e70f8764170032e38e01d585b57aecddf22502736841fd.scope: Deactivated successfully.
Jan 10 12:00:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:00:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:00:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:00:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:00:40 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 10 12:00:40 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 10 12:00:40 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 10 12:00:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:00:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:00:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 B/s, 0 objects/s recovering
Jan 10 12:00:43 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 10 12:00:43 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 10 12:00:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 67 B/s, 0 objects/s recovering
Jan 10 12:00:44 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 10 12:00:44 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.290204970656704e-07 of space, bias 4.0, pg target 0.0011148245964788044 quantized to 16 (current 16)
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:00:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:00:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 12:00:47 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 10 12:00:47 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 10 12:00:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 12:00:48 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 10 12:00:48 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 10 12:00:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Jan 10 12:00:50 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 10 12:00:50 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 10 12:00:50 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 10 12:00:50 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 10 12:00:51 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 10 12:00:51 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 10 12:00:51 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 10 12:00:51 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 10 12:00:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Jan 10 12:00:52 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 10 12:00:52 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 10 12:00:52 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 10 12:00:52 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 10 12:00:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:54 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 10 12:00:54 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 10 12:00:55 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 10 12:00:55 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 10 12:00:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:56 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 10 12:00:56 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 10 12:00:57 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 10 12:00:57 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 10 12:00:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:00:58 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 10 12:00:58 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 10 12:00:58 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 10 12:00:58 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 10 12:00:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:00:59 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 10 12:00:59 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 10 12:01:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 10 12:01:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 10 12:01:01 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 10 12:01:01 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 10 12:01:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:02 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 10 12:01:02 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 10 12:01:03 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 10 12:01:03 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 10 12:01:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:05 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 10 12:01:05 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 10 12:01:05 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 10 12:01:05 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 10 12:01:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:07 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 10 12:01:07 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 10 12:01:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:01:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:01:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:01:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:01:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:01:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:01:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:10 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 10 12:01:10 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 10 12:01:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:12 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 10 12:01:12 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 10 12:01:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 10 12:01:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 10 12:01:14 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 10 12:01:14 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 10 12:01:15 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 10 12:01:15 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 10 12:01:15 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 10 12:01:15 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 10 12:01:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:18 np0005580781 python3.9[99607]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:01:18 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 10 12:01:18 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 10 12:01:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:20 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 10 12:01:20 np0005580781 python3.9[99894]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 10 12:01:20 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 10 12:01:21 np0005580781 python3.9[100046]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 10 12:01:21 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 10 12:01:21 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 10 12:01:21 np0005580781 python3.9[100198]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:01:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:22 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 10 12:01:22 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 10 12:01:22 np0005580781 python3.9[100350]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 10 12:01:23 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 10 12:01:23 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 10 12:01:23 np0005580781 python3.9[100502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:01:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:24 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 10 12:01:24 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 10 12:01:24 np0005580781 python3.9[100654]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:01:25 np0005580781 python3.9[100732]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:01:25 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 10 12:01:25 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 10 12:01:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:26 np0005580781 python3.9[100884]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:01:27 np0005580781 python3.9[101038]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 10 12:01:27 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 10 12:01:27 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 10 12:01:27 np0005580781 python3.9[101191]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 10 12:01:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:28 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 10 12:01:28 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 10 12:01:28 np0005580781 python3.9[101344]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 12:01:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:29 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 10 12:01:29 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 10 12:01:29 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 10 12:01:29 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 10 12:01:29 np0005580781 python3.9[101496]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 10 12:01:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:30 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 10 12:01:30 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 10 12:01:30 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 10 12:01:30 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 10 12:01:30 np0005580781 python3.9[101648]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:01:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:32 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 10 12:01:32 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 10 12:01:32 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 10 12:01:32 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 10 12:01:32 np0005580781 python3.9[101801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:01:33 np0005580781 python3.9[101953]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:01:33 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 10 12:01:33 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 10 12:01:33 np0005580781 python3.9[102031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:01:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:34 np0005580781 python3.9[102183]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:01:34 np0005580781 python3.9[102261]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:01:35 np0005580781 python3.9[102413]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:01:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:37 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 10 12:01:37 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:01:37
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes']
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:38 np0005580781 python3.9[102564]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:01:38 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 10 12:01:38 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 10 12:01:38 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 10 12:01:38 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:01:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:38 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:01:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:01:39 np0005580781 python3.9[102716]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 10 12:01:39 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 10 12:01:39 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 10 12:01:39 np0005580781 python3.9[102866]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:01:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:01:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:01:40 np0005580781 python3.9[103082]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:01:41 np0005580781 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 10 12:01:41 np0005580781 systemd[1]: tuned.service: Deactivated successfully.
Jan 10 12:01:41 np0005580781 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 10 12:01:41 np0005580781 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 10 12:01:41 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 10 12:01:41 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 10 12:01:41 np0005580781 podman[103168]: 2026-01-10 17:01:41.409182561 +0000 UTC m=+0.082028046 container create 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 12:01:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:01:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:01:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:01:41 np0005580781 podman[103168]: 2026-01-10 17:01:41.351388763 +0000 UTC m=+0.024234268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:01:41 np0005580781 systemd[1]: Started libpod-conmon-000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3.scope.
Jan 10 12:01:41 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:01:41 np0005580781 podman[103168]: 2026-01-10 17:01:41.510334118 +0000 UTC m=+0.183179673 container init 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 12:01:41 np0005580781 podman[103168]: 2026-01-10 17:01:41.519292772 +0000 UTC m=+0.192138267 container start 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:01:41 np0005580781 podman[103168]: 2026-01-10 17:01:41.522997857 +0000 UTC m=+0.195843362 container attach 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 12:01:41 np0005580781 jolly_curie[103186]: 167 167
Jan 10 12:01:41 np0005580781 systemd[1]: libpod-000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3.scope: Deactivated successfully.
Jan 10 12:01:41 np0005580781 podman[103168]: 2026-01-10 17:01:41.528343939 +0000 UTC m=+0.201189424 container died 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 12:01:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3aab4e681fe84b6b9e5adf2f14bd1a610da4f4146e4ad27371316dee6511c0e5-merged.mount: Deactivated successfully.
Jan 10 12:01:41 np0005580781 podman[103168]: 2026-01-10 17:01:41.569629759 +0000 UTC m=+0.242475244 container remove 000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 12:01:41 np0005580781 systemd[1]: libpod-conmon-000b2e0937879957347675201c7de2327d78896118f0d356708612c4c81e21b3.scope: Deactivated successfully.
Jan 10 12:01:41 np0005580781 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 10 12:01:41 np0005580781 podman[103220]: 2026-01-10 17:01:41.739500074 +0000 UTC m=+0.051757148 container create b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:01:41 np0005580781 systemd[1]: Started libpod-conmon-b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78.scope.
Jan 10 12:01:41 np0005580781 podman[103220]: 2026-01-10 17:01:41.714281189 +0000 UTC m=+0.026538233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:01:41 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:01:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:41 np0005580781 podman[103220]: 2026-01-10 17:01:41.829863746 +0000 UTC m=+0.142120810 container init b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:01:41 np0005580781 podman[103220]: 2026-01-10 17:01:41.839581021 +0000 UTC m=+0.151838055 container start b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:01:41 np0005580781 podman[103220]: 2026-01-10 17:01:41.843184083 +0000 UTC m=+0.155441147 container attach b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:01:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:42 np0005580781 python3.9[103382]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 10 12:01:42 np0005580781 fervent_tu[103254]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:01:42 np0005580781 fervent_tu[103254]: --> All data devices are unavailable
Jan 10 12:01:42 np0005580781 systemd[1]: libpod-b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78.scope: Deactivated successfully.
Jan 10 12:01:42 np0005580781 podman[103220]: 2026-01-10 17:01:42.445919321 +0000 UTC m=+0.758176375 container died b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:01:42 np0005580781 systemd[1]: var-lib-containers-storage-overlay-7497cb18f3ca89bf21136d4f0de5bcba617467e881368d12d344a1a52c915b12-merged.mount: Deactivated successfully.
Jan 10 12:01:42 np0005580781 podman[103220]: 2026-01-10 17:01:42.493962496 +0000 UTC m=+0.806219530 container remove b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:01:42 np0005580781 systemd[1]: libpod-conmon-b2b171dbe5ffe2a508c40c6e2e0de1c6bada1fff4d06fa99be0c344720ceec78.scope: Deactivated successfully.
Jan 10 12:01:43 np0005580781 podman[103494]: 2026-01-10 17:01:43.068824186 +0000 UTC m=+0.055700933 container create f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:01:43 np0005580781 systemd[1]: Started libpod-conmon-f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859.scope.
Jan 10 12:01:43 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:01:43 np0005580781 podman[103494]: 2026-01-10 17:01:43.04360158 +0000 UTC m=+0.030478457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:01:43 np0005580781 podman[103494]: 2026-01-10 17:01:43.150631981 +0000 UTC m=+0.137508828 container init f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 12:01:43 np0005580781 podman[103494]: 2026-01-10 17:01:43.159814091 +0000 UTC m=+0.146690888 container start f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:01:43 np0005580781 naughty_albattani[103510]: 167 167
Jan 10 12:01:43 np0005580781 podman[103494]: 2026-01-10 17:01:43.164016651 +0000 UTC m=+0.150893438 container attach f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:01:43 np0005580781 systemd[1]: libpod-f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859.scope: Deactivated successfully.
Jan 10 12:01:43 np0005580781 podman[103494]: 2026-01-10 17:01:43.165547984 +0000 UTC m=+0.152424801 container died f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:01:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6df8e85110fa5979245f72674140df0755cbd693c3ccc50dce372240baa555f5-merged.mount: Deactivated successfully.
Jan 10 12:01:43 np0005580781 podman[103494]: 2026-01-10 17:01:43.218398316 +0000 UTC m=+0.205275113 container remove f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_albattani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:01:43 np0005580781 systemd[1]: libpod-conmon-f914c5917fba935e53881b74909710ade22c9ae541e944461d2a95667933d859.scope: Deactivated successfully.
Jan 10 12:01:43 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 10 12:01:43 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 10 12:01:43 np0005580781 podman[103534]: 2026-01-10 17:01:43.394856318 +0000 UTC m=+0.054235021 container create 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:01:43 np0005580781 systemd[1]: Started libpod-conmon-0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a.scope.
Jan 10 12:01:43 np0005580781 podman[103534]: 2026-01-10 17:01:43.371739632 +0000 UTC m=+0.031118315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:01:43 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:01:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:43 np0005580781 podman[103534]: 2026-01-10 17:01:43.487958373 +0000 UTC m=+0.147337096 container init 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:01:43 np0005580781 podman[103534]: 2026-01-10 17:01:43.497441663 +0000 UTC m=+0.156820336 container start 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 10 12:01:43 np0005580781 podman[103534]: 2026-01-10 17:01:43.503074833 +0000 UTC m=+0.162453546 container attach 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 12:01:43 np0005580781 kind_davinci[103550]: {
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:    "0": [
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:        {
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "devices": [
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "/dev/loop3"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            ],
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_name": "ceph_lv0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_size": "21470642176",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "name": "ceph_lv0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "tags": {
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cluster_name": "ceph",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.crush_device_class": "",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.encrypted": "0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.objectstore": "bluestore",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osd_id": "0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.type": "block",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.vdo": "0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.with_tpm": "0"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            },
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "type": "block",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "vg_name": "ceph_vg0"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:        }
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:    ],
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:    "1": [
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:        {
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "devices": [
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "/dev/loop4"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            ],
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_name": "ceph_lv1",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_size": "21470642176",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "name": "ceph_lv1",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "tags": {
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cluster_name": "ceph",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.crush_device_class": "",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.encrypted": "0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.objectstore": "bluestore",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osd_id": "1",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.type": "block",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.vdo": "0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.with_tpm": "0"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            },
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "type": "block",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "vg_name": "ceph_vg1"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:        }
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:    ],
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:    "2": [
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:        {
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "devices": [
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "/dev/loop5"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            ],
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_name": "ceph_lv2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_size": "21470642176",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "name": "ceph_lv2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "tags": {
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.cluster_name": "ceph",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.crush_device_class": "",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.encrypted": "0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.objectstore": "bluestore",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osd_id": "2",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.type": "block",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.vdo": "0",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:                "ceph.with_tpm": "0"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            },
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "type": "block",
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:            "vg_name": "ceph_vg2"
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:        }
Jan 10 12:01:43 np0005580781 kind_davinci[103550]:    ]
Jan 10 12:01:43 np0005580781 kind_davinci[103550]: }
Jan 10 12:01:43 np0005580781 systemd[1]: libpod-0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a.scope: Deactivated successfully.
Jan 10 12:01:43 np0005580781 podman[103534]: 2026-01-10 17:01:43.851356667 +0000 UTC m=+0.510735400 container died 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:01:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay-89cf8b0c1172334acc05fa3af3dbd903465f134348bde563e4791006e17ce686-merged.mount: Deactivated successfully.
Jan 10 12:01:43 np0005580781 podman[103534]: 2026-01-10 17:01:43.914097759 +0000 UTC m=+0.573476412 container remove 0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_davinci, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 12:01:43 np0005580781 systemd[1]: libpod-conmon-0c60dca3ae43e24784238265e134641fa4ad651936b906c453ecbf7f466d677a.scope: Deactivated successfully.
Jan 10 12:01:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:01:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:01:44 np0005580781 python3.9[103743]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:01:44 np0005580781 podman[103761]: 2026-01-10 17:01:44.432799215 +0000 UTC m=+0.052570485 container create c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:01:44 np0005580781 systemd[1]: Started libpod-conmon-c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a.scope.
Jan 10 12:01:44 np0005580781 podman[103761]: 2026-01-10 17:01:44.407326551 +0000 UTC m=+0.027097871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:01:44 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:01:44 np0005580781 podman[103761]: 2026-01-10 17:01:44.526063164 +0000 UTC m=+0.145834584 container init c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 10 12:01:44 np0005580781 podman[103761]: 2026-01-10 17:01:44.5343808 +0000 UTC m=+0.154152070 container start c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:01:44 np0005580781 podman[103761]: 2026-01-10 17:01:44.538313682 +0000 UTC m=+0.158084972 container attach c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 12:01:44 np0005580781 clever_meitner[103778]: 167 167
Jan 10 12:01:44 np0005580781 systemd[1]: libpod-c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a.scope: Deactivated successfully.
Jan 10 12:01:44 np0005580781 podman[103761]: 2026-01-10 17:01:44.540603717 +0000 UTC m=+0.160374997 container died c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 12:01:44 np0005580781 systemd[1]: var-lib-containers-storage-overlay-d0454058dbbea5b21cad6d42b4905541eaf410ba9e246a1bc5355a8bcb87244f-merged.mount: Deactivated successfully.
Jan 10 12:01:44 np0005580781 podman[103761]: 2026-01-10 17:01:44.578386971 +0000 UTC m=+0.198158241 container remove c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:01:44 np0005580781 systemd[1]: libpod-conmon-c057b405eff814588ce5ba2474d6faa7b09d97d824b5904664cf03780f4ceb1a.scope: Deactivated successfully.
Jan 10 12:01:44 np0005580781 podman[103877]: 2026-01-10 17:01:44.753735822 +0000 UTC m=+0.051949177 container create 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:01:44 np0005580781 podman[103877]: 2026-01-10 17:01:44.732078487 +0000 UTC m=+0.030291852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:01:44 np0005580781 systemd[1]: Started libpod-conmon-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope.
Jan 10 12:01:44 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:01:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:01:45 np0005580781 podman[103877]: 2026-01-10 17:01:45.222879529 +0000 UTC m=+0.521092914 container init 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:01:45 np0005580781 podman[103877]: 2026-01-10 17:01:45.237258578 +0000 UTC m=+0.535471933 container start 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 12:01:45 np0005580781 python3.9[103971]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:01:45 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 10 12:01:45 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 10 12:01:45 np0005580781 podman[103877]: 2026-01-10 17:01:45.460572672 +0000 UTC m=+0.758786067 container attach 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:01:45 np0005580781 systemd[1]: session-35.scope: Deactivated successfully.
Jan 10 12:01:45 np0005580781 systemd[1]: session-35.scope: Consumed 1min 10.570s CPU time.
Jan 10 12:01:45 np0005580781 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Jan 10 12:01:45 np0005580781 systemd-logind[798]: Removed session 35.
Jan 10 12:01:45 np0005580781 lvm[104074]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:01:45 np0005580781 lvm[104073]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:01:45 np0005580781 lvm[104073]: VG ceph_vg0 finished
Jan 10 12:01:45 np0005580781 lvm[104074]: VG ceph_vg1 finished
Jan 10 12:01:45 np0005580781 lvm[104076]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:01:45 np0005580781 lvm[104076]: VG ceph_vg2 finished
Jan 10 12:01:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:46 np0005580781 musing_hopper[103947]: {}
Jan 10 12:01:46 np0005580781 systemd[1]: libpod-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope: Deactivated successfully.
Jan 10 12:01:46 np0005580781 systemd[1]: libpod-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope: Consumed 1.397s CPU time.
Jan 10 12:01:46 np0005580781 podman[103877]: 2026-01-10 17:01:46.119984624 +0000 UTC m=+1.418197989 container died 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:01:46 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4b77299e35e0935e2b2cd2733546aa1a1e63504e45d020e340881dcaeadadab3-merged.mount: Deactivated successfully.
Jan 10 12:01:46 np0005580781 podman[103877]: 2026-01-10 17:01:46.291939639 +0000 UTC m=+1.590153004 container remove 9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_hopper, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 12:01:46 np0005580781 systemd[1]: libpod-conmon-9e603cbaf74050d79bb506fc67051645930b41cc2f72f9f4dc4d248627606720.scope: Deactivated successfully.
Jan 10 12:01:46 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 10 12:01:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:01:46 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 10 12:01:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:01:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:01:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:01:47 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:01:47 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:01:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:49 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 10 12:01:49 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 10 12:01:49 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 10 12:01:49 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 10 12:01:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:51 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 10 12:01:51 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 10 12:01:51 np0005580781 systemd-logind[798]: New session 36 of user zuul.
Jan 10 12:01:51 np0005580781 systemd[1]: Started Session 36 of User zuul.
Jan 10 12:01:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:52 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 10 12:01:52 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 10 12:01:52 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 10 12:01:52 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 10 12:01:52 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 10 12:01:52 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 10 12:01:52 np0005580781 python3.9[104270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:01:53 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 10 12:01:53 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 10 12:01:53 np0005580781 python3.9[104426]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 10 12:01:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:01:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:54 np0005580781 python3.9[104579]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:01:55 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 10 12:01:55 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 10 12:01:55 np0005580781 python3.9[104663]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 12:01:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:56 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 10 12:01:56 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 10 12:01:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:01:58 np0005580781 python3.9[104816]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:01:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 10 12:02:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 10 12:02:00 np0005580781 python3.9[104969]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:02:01 np0005580781 python3.9[105122]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:02:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:02 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 10 12:02:02 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 10 12:02:02 np0005580781 python3.9[105274]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 10 12:02:03 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 10 12:02:03 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 10 12:02:03 np0005580781 python3.9[105424]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:02:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:04 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 10 12:02:04 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 10 12:02:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 10 12:02:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 10 12:02:04 np0005580781 python3.9[105582]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:02:04 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 10 12:02:04 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 10 12:02:05 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 10 12:02:05 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 10 12:02:05 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 10 12:02:05 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 10 12:02:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:06 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 10 12:02:06 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 10 12:02:06 np0005580781 python3.9[105735]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:02:07 np0005580781 python3.9[106022]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 10 12:02:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:08 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 10 12:02:08 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 10 12:02:08 np0005580781 python3.9[106172]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:02:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:02:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:02:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:02:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:02:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:02:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:02:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:09 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 10 12:02:09 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 10 12:02:09 np0005580781 python3.9[106326]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:02:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:10 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 10 12:02:10 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 10 12:02:11 np0005580781 python3.9[106479]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:02:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:12 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 10 12:02:12 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 10 12:02:13 np0005580781 python3.9[106632]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:02:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 10 12:02:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 10 12:02:14 np0005580781 python3.9[106786]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 10 12:02:15 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 10 12:02:15 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 10 12:02:15 np0005580781 systemd[1]: session-36.scope: Deactivated successfully.
Jan 10 12:02:15 np0005580781 systemd[1]: session-36.scope: Consumed 18.810s CPU time.
Jan 10 12:02:15 np0005580781 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Jan 10 12:02:15 np0005580781 systemd-logind[798]: Removed session 36.
Jan 10 12:02:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:16 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 10 12:02:16 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 10 12:02:17 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 10 12:02:17 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 10 12:02:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:18 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 10 12:02:18 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 10 12:02:18 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 10 12:02:18 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 10 12:02:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 10 12:02:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 10 12:02:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:20 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 10 12:02:20 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 10 12:02:20 np0005580781 systemd-logind[798]: New session 37 of user zuul.
Jan 10 12:02:20 np0005580781 systemd[1]: Started Session 37 of User zuul.
Jan 10 12:02:21 np0005580781 python3.9[106964]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:02:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:22 np0005580781 python3.9[107118]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:02:23 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 10 12:02:23 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 10 12:02:23 np0005580781 python3.9[107311]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:02:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:24 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 10 12:02:24 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 10 12:02:24 np0005580781 systemd[1]: session-37.scope: Deactivated successfully.
Jan 10 12:02:24 np0005580781 systemd[1]: session-37.scope: Consumed 2.603s CPU time.
Jan 10 12:02:24 np0005580781 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Jan 10 12:02:24 np0005580781 systemd-logind[798]: Removed session 37.
Jan 10 12:02:25 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 10 12:02:25 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 10 12:02:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:28 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 10 12:02:28 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 10 12:02:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:29 np0005580781 systemd-logind[798]: New session 38 of user zuul.
Jan 10 12:02:29 np0005580781 systemd[1]: Started Session 38 of User zuul.
Jan 10 12:02:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:30 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 10 12:02:30 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 10 12:02:30 np0005580781 python3.9[107490]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:02:31 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 10 12:02:31 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 10 12:02:31 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 10 12:02:31 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 10 12:02:31 np0005580781 python3.9[107644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:02:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:32 np0005580781 python3.9[107800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:02:33 np0005580781 python3.9[107884]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:02:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:34 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 10 12:02:34 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 10 12:02:34 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 10 12:02:34 np0005580781 ceph-osd[85764]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 10 12:02:35 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 10 12:02:35 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 10 12:02:35 np0005580781 python3.9[108039]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:02:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:37 np0005580781 python3.9[108234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:02:37 np0005580781 python3.9[108386]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:02:38
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'images']
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:38 np0005580781 python3.9[108551]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:02:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:02:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:02:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:02:39 np0005580781 python3.9[108629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:02:39 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 10 12:02:39 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 10 12:02:40 np0005580781 python3.9[108781]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:02:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:40 np0005580781 python3.9[108859]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:02:41 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 10 12:02:41 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 10 12:02:41 np0005580781 python3.9[109011]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:02:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:42 np0005580781 python3.9[109163]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:02:42 np0005580781 python3.9[109315]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:02:43 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 10 12:02:43 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 10 12:02:43 np0005580781 python3.9[109467]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:02:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:02:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:02:44 np0005580781 python3.9[109619]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:02:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:46 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 10 12:02:46 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 10 12:02:46 np0005580781 python3.9[109772]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:02:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:02:47 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 10 12:02:47 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 10 12:02:47 np0005580781 python3.9[110032]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:02:47 np0005580781 podman[110095]: 2026-01-10 17:02:47.651729469 +0000 UTC m=+0.043517759 container create fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:02:47 np0005580781 systemd[1]: Started libpod-conmon-fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0.scope.
Jan 10 12:02:47 np0005580781 podman[110095]: 2026-01-10 17:02:47.635402023 +0000 UTC m=+0.027190343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:02:47 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:02:47 np0005580781 podman[110095]: 2026-01-10 17:02:47.751978105 +0000 UTC m=+0.143766435 container init fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:02:47 np0005580781 podman[110095]: 2026-01-10 17:02:47.760579339 +0000 UTC m=+0.152367639 container start fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 12:02:47 np0005580781 podman[110095]: 2026-01-10 17:02:47.764095665 +0000 UTC m=+0.155883975 container attach fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 12:02:47 np0005580781 hardcore_euler[110116]: 167 167
Jan 10 12:02:47 np0005580781 systemd[1]: libpod-fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0.scope: Deactivated successfully.
Jan 10 12:02:47 np0005580781 podman[110095]: 2026-01-10 17:02:47.767104937 +0000 UTC m=+0.158893237 container died fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:02:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay-32a7e4d49bef48ceb62d64f8bce932dd4101843c16ff0d4430a22446cbf98432-merged.mount: Deactivated successfully.
Jan 10 12:02:47 np0005580781 podman[110095]: 2026-01-10 17:02:47.808279321 +0000 UTC m=+0.200067611 container remove fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 12:02:47 np0005580781 systemd[1]: libpod-conmon-fec2727e1157a42eb905d7fa3edaaa3384f331b7d9b3c765f1cbfd5e99d160e0.scope: Deactivated successfully.
Jan 10 12:02:47 np0005580781 podman[110235]: 2026-01-10 17:02:47.967966579 +0000 UTC m=+0.044910926 container create 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:02:48 np0005580781 systemd[1]: Started libpod-conmon-224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3.scope.
Jan 10 12:02:48 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:02:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:48 np0005580781 podman[110235]: 2026-01-10 17:02:48.041249979 +0000 UTC m=+0.118194376 container init 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:02:48 np0005580781 podman[110235]: 2026-01-10 17:02:47.950353839 +0000 UTC m=+0.027298216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:02:48 np0005580781 podman[110235]: 2026-01-10 17:02:48.052810265 +0000 UTC m=+0.129754622 container start 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 12:02:48 np0005580781 podman[110235]: 2026-01-10 17:02:48.058724916 +0000 UTC m=+0.135669303 container attach 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 12:02:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:48 np0005580781 python3.9[110279]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:02:48 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:02:48 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:02:48 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:02:48 np0005580781 wizardly_ellis[110280]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:02:48 np0005580781 wizardly_ellis[110280]: --> All data devices are unavailable
Jan 10 12:02:48 np0005580781 systemd[1]: libpod-224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3.scope: Deactivated successfully.
Jan 10 12:02:48 np0005580781 podman[110235]: 2026-01-10 17:02:48.617921327 +0000 UTC m=+0.694865684 container died 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 12:02:48 np0005580781 systemd[1]: var-lib-containers-storage-overlay-beaf53ebe700309851883469d879ff968dcc0f50654303bafc9667360f623497-merged.mount: Deactivated successfully.
Jan 10 12:02:48 np0005580781 systemd[76625]: Created slice User Background Tasks Slice.
Jan 10 12:02:48 np0005580781 podman[110235]: 2026-01-10 17:02:48.677502273 +0000 UTC m=+0.754446620 container remove 224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_ellis, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:02:48 np0005580781 systemd[76625]: Starting Cleanup of User's Temporary Files and Directories...
Jan 10 12:02:48 np0005580781 systemd[1]: libpod-conmon-224cfc8e7d4aa5962a8ed77d9e23487f566438e5622e4f6fa7e3ebdece25adb3.scope: Deactivated successfully.
Jan 10 12:02:48 np0005580781 systemd[76625]: Finished Cleanup of User's Temporary Files and Directories.
Jan 10 12:02:48 np0005580781 python3.9[110490]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:02:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:49 np0005580781 podman[110547]: 2026-01-10 17:02:49.128387108 +0000 UTC m=+0.040375663 container create 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:02:49 np0005580781 systemd[1]: Started libpod-conmon-7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c.scope.
Jan 10 12:02:49 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:02:49 np0005580781 podman[110547]: 2026-01-10 17:02:49.198675797 +0000 UTC m=+0.110664362 container init 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 12:02:49 np0005580781 podman[110547]: 2026-01-10 17:02:49.204545207 +0000 UTC m=+0.116533762 container start 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:02:49 np0005580781 optimistic_panini[110571]: 167 167
Jan 10 12:02:49 np0005580781 podman[110547]: 2026-01-10 17:02:49.207829536 +0000 UTC m=+0.119818091 container attach 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:02:49 np0005580781 podman[110547]: 2026-01-10 17:02:49.112244958 +0000 UTC m=+0.024233533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:02:49 np0005580781 systemd[1]: libpod-7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c.scope: Deactivated successfully.
Jan 10 12:02:49 np0005580781 podman[110547]: 2026-01-10 17:02:49.2094215 +0000 UTC m=+0.121410055 container died 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 12:02:49 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e1e54025596cb42c39f8762b0b33cda82d73d631c5773645c6acb651a9eb109d-merged.mount: Deactivated successfully.
Jan 10 12:02:49 np0005580781 podman[110547]: 2026-01-10 17:02:49.245955437 +0000 UTC m=+0.157943992 container remove 7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 12:02:49 np0005580781 systemd[1]: libpod-conmon-7d20398867b9ce409d03967b73853ffdd8b11bf99dd59384a06daf75c742d24c.scope: Deactivated successfully.
Jan 10 12:02:49 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 10 12:02:49 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 10 12:02:49 np0005580781 podman[110647]: 2026-01-10 17:02:49.416256765 +0000 UTC m=+0.042457190 container create a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:02:49 np0005580781 systemd[1]: Started libpod-conmon-a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99.scope.
Jan 10 12:02:49 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:02:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:49 np0005580781 podman[110647]: 2026-01-10 17:02:49.394577193 +0000 UTC m=+0.020777658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:02:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:49 np0005580781 podman[110647]: 2026-01-10 17:02:49.501984604 +0000 UTC m=+0.128185049 container init a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:02:49 np0005580781 podman[110647]: 2026-01-10 17:02:49.513876789 +0000 UTC m=+0.140077214 container start a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 12:02:49 np0005580781 podman[110647]: 2026-01-10 17:02:49.519286726 +0000 UTC m=+0.145487231 container attach a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 12:02:49 np0005580781 bold_yonath[110664]: {
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:    "0": [
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:        {
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "devices": [
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "/dev/loop3"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            ],
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_name": "ceph_lv0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_size": "21470642176",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "name": "ceph_lv0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "tags": {
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cluster_name": "ceph",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.crush_device_class": "",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.encrypted": "0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.objectstore": "bluestore",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osd_id": "0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.type": "block",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.vdo": "0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.with_tpm": "0"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            },
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "type": "block",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "vg_name": "ceph_vg0"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:        }
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:    ],
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:    "1": [
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:        {
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "devices": [
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "/dev/loop4"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            ],
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_name": "ceph_lv1",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_size": "21470642176",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "name": "ceph_lv1",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "tags": {
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cluster_name": "ceph",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.crush_device_class": "",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.encrypted": "0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.objectstore": "bluestore",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osd_id": "1",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.type": "block",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.vdo": "0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.with_tpm": "0"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            },
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "type": "block",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "vg_name": "ceph_vg1"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:        }
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:    ],
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:    "2": [
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:        {
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "devices": [
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "/dev/loop5"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            ],
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_name": "ceph_lv2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_size": "21470642176",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "name": "ceph_lv2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "tags": {
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.cluster_name": "ceph",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.crush_device_class": "",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.encrypted": "0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.objectstore": "bluestore",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osd_id": "2",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.type": "block",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.vdo": "0",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:                "ceph.with_tpm": "0"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            },
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "type": "block",
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:            "vg_name": "ceph_vg2"
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:        }
Jan 10 12:02:49 np0005580781 bold_yonath[110664]:    ]
Jan 10 12:02:49 np0005580781 bold_yonath[110664]: }
Jan 10 12:02:49 np0005580781 systemd[1]: libpod-a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99.scope: Deactivated successfully.
Jan 10 12:02:49 np0005580781 podman[110647]: 2026-01-10 17:02:49.898599638 +0000 UTC m=+0.524800133 container died a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:02:49 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4fb15d4f7e2b591e578bee3a899514d18bbad58ffbfcc5f5ac5065a05bc1884c-merged.mount: Deactivated successfully.
Jan 10 12:02:49 np0005580781 python3.9[110746]: ansible-service_facts Invoked
Jan 10 12:02:49 np0005580781 podman[110647]: 2026-01-10 17:02:49.966955044 +0000 UTC m=+0.593155499 container remove a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 12:02:49 np0005580781 systemd[1]: libpod-conmon-a8b934ff9225dc5d6aae6edd8ec6676c85e8e07b7117a843d1a6116d706bca99.scope: Deactivated successfully.
Jan 10 12:02:50 np0005580781 network[110777]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 12:02:50 np0005580781 network[110778]: 'network-scripts' will be removed from distribution in near future.
Jan 10 12:02:50 np0005580781 network[110779]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 12:02:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:50 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 10 12:02:50 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 10 12:02:51 np0005580781 podman[110863]: 2026-01-10 17:02:51.085228092 +0000 UTC m=+0.054141509 container create 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:02:51 np0005580781 systemd[1]: Started libpod-conmon-44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506.scope.
Jan 10 12:02:51 np0005580781 podman[110863]: 2026-01-10 17:02:51.053244909 +0000 UTC m=+0.022158336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:02:51 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:02:51 np0005580781 podman[110863]: 2026-01-10 17:02:51.170511189 +0000 UTC m=+0.139424606 container init 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:02:51 np0005580781 podman[110863]: 2026-01-10 17:02:51.179383241 +0000 UTC m=+0.148296638 container start 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:02:51 np0005580781 podman[110863]: 2026-01-10 17:02:51.18300804 +0000 UTC m=+0.151921557 container attach 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:02:51 np0005580781 jovial_kirch[110883]: 167 167
Jan 10 12:02:51 np0005580781 systemd[1]: libpod-44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506.scope: Deactivated successfully.
Jan 10 12:02:51 np0005580781 podman[110863]: 2026-01-10 17:02:51.188553741 +0000 UTC m=+0.157467158 container died 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 12:02:51 np0005580781 systemd[1]: var-lib-containers-storage-overlay-bf3bbc4cdfd25f28db5bfbe0605b21ebf113c13f6841342fd54adbc729e4e0a6-merged.mount: Deactivated successfully.
Jan 10 12:02:51 np0005580781 podman[110863]: 2026-01-10 17:02:51.230764443 +0000 UTC m=+0.199677830 container remove 44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kirch, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 10 12:02:51 np0005580781 systemd[1]: libpod-conmon-44a5934e3225e949d21dfe98d49526db52394a5115578db3a3ca6c54dadfa506.scope: Deactivated successfully.
Jan 10 12:02:51 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 10 12:02:51 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 10 12:02:51 np0005580781 podman[110917]: 2026-01-10 17:02:51.434032511 +0000 UTC m=+0.051903108 container create e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 10 12:02:51 np0005580781 systemd[1]: Started libpod-conmon-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope.
Jan 10 12:02:51 np0005580781 podman[110917]: 2026-01-10 17:02:51.409562893 +0000 UTC m=+0.027433460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:02:51 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:02:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:02:51 np0005580781 podman[110917]: 2026-01-10 17:02:51.541461443 +0000 UTC m=+0.159332020 container init e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 12:02:51 np0005580781 podman[110917]: 2026-01-10 17:02:51.557463599 +0000 UTC m=+0.175334136 container start e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:02:51 np0005580781 podman[110917]: 2026-01-10 17:02:51.562482876 +0000 UTC m=+0.180353643 container attach e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 12:02:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:52 np0005580781 lvm[111043]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:02:52 np0005580781 lvm[111043]: VG ceph_vg0 finished
Jan 10 12:02:52 np0005580781 lvm[111044]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:02:52 np0005580781 lvm[111044]: VG ceph_vg1 finished
Jan 10 12:02:52 np0005580781 lvm[111046]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:02:52 np0005580781 lvm[111046]: VG ceph_vg2 finished
Jan 10 12:02:52 np0005580781 romantic_mclaren[110937]: {}
Jan 10 12:02:52 np0005580781 systemd[1]: libpod-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope: Deactivated successfully.
Jan 10 12:02:52 np0005580781 systemd[1]: libpod-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope: Consumed 1.410s CPU time.
Jan 10 12:02:52 np0005580781 podman[110917]: 2026-01-10 17:02:52.461680706 +0000 UTC m=+1.079551243 container died e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:02:52 np0005580781 systemd[1]: var-lib-containers-storage-overlay-059c50753939439fce09707bf8e128b4894be75fd54afc32b7a8bd40406a8134-merged.mount: Deactivated successfully.
Jan 10 12:02:52 np0005580781 podman[110917]: 2026-01-10 17:02:52.509164172 +0000 UTC m=+1.127034719 container remove e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mclaren, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:02:52 np0005580781 systemd[1]: libpod-conmon-e73dacfea26e89089c1685b483868c770690d719f71fadc495d5d0e54275d4de.scope: Deactivated successfully.
Jan 10 12:02:52 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:02:52 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:02:52 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:02:52 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:02:53 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:02:53 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:02:53 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 10 12:02:53 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 10 12:02:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:56 np0005580781 python3.9[111470]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:02:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:02:58 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 10 12:02:58 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 10 12:02:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:02:59 np0005580781 python3.9[111623]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 10 12:02:59 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 10 12:02:59 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 10 12:03:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 10 12:03:00 np0005580781 python3.9[111775]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:00 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 10 12:03:00 np0005580781 python3.9[111853]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:01 np0005580781 python3.9[112005]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:02 np0005580781 python3.9[112083]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:03 np0005580781 python3.9[112235]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:03 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 10 12:03:03 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 10 12:03:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:04 np0005580781 python3.9[112387]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:03:05 np0005580781 python3.9[112471]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:03:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:06 np0005580781 systemd[1]: session-38.scope: Deactivated successfully.
Jan 10 12:03:06 np0005580781 systemd[1]: session-38.scope: Consumed 25.671s CPU time.
Jan 10 12:03:06 np0005580781 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Jan 10 12:03:06 np0005580781 systemd-logind[798]: Removed session 38.
Jan 10 12:03:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:03:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:03:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:03:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:03:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:03:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:03:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:11 np0005580781 systemd-logind[798]: New session 39 of user zuul.
Jan 10 12:03:12 np0005580781 systemd[1]: Started Session 39 of User zuul.
Jan 10 12:03:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:12 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 10 12:03:12 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 10 12:03:12 np0005580781 python3.9[112653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:13 np0005580781 python3.9[112805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:14 np0005580781 python3.9[112883]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:14 np0005580781 systemd[1]: session-39.scope: Deactivated successfully.
Jan 10 12:03:14 np0005580781 systemd[1]: session-39.scope: Consumed 1.771s CPU time.
Jan 10 12:03:14 np0005580781 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Jan 10 12:03:14 np0005580781 systemd-logind[798]: Removed session 39.
Jan 10 12:03:15 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 10 12:03:15 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 10 12:03:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:16 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 10 12:03:16 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 10 12:03:17 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 10 12:03:17 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 10 12:03:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:20 np0005580781 systemd-logind[798]: New session 40 of user zuul.
Jan 10 12:03:20 np0005580781 systemd[1]: Started Session 40 of User zuul.
Jan 10 12:03:21 np0005580781 python3.9[113062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:03:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:22 np0005580781 python3.9[113218]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:23 np0005580781 python3.9[113393]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:24 np0005580781 python3.9[113471]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.8a12f6b2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:25 np0005580781 python3.9[113623]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:25 np0005580781 python3.9[113701]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.w9dsm5cd recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:26 np0005580781 python3.9[113853]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:03:27 np0005580781 python3.9[114005]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:27 np0005580781 python3.9[114083]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:03:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:28 np0005580781 python3.9[114235]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:28 np0005580781 python3.9[114313]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:03:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:29 np0005580781 python3.9[114465]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:30 np0005580781 python3.9[114617]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:30 np0005580781 python3.9[114695]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:31 np0005580781 python3.9[114847]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:31 np0005580781 python3.9[114925]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:32 np0005580781 python3.9[115077]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:03:32 np0005580781 systemd[1]: Reloading.
Jan 10 12:03:33 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:03:33 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:03:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:34 np0005580781 python3.9[115267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:34 np0005580781 python3.9[115345]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:35 np0005580781 python3.9[115497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:35 np0005580781 python3.9[115575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:36 np0005580781 python3.9[115727]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:03:36 np0005580781 systemd[1]: Reloading.
Jan 10 12:03:36 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:03:36 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:03:36 np0005580781 systemd[1]: Starting Create netns directory...
Jan 10 12:03:36 np0005580781 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 12:03:36 np0005580781 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 12:03:36 np0005580781 systemd[1]: Finished Create netns directory.
Jan 10 12:03:37 np0005580781 python3.9[115919]: ansible-ansible.builtin.service_facts Invoked
Jan 10 12:03:37 np0005580781 network[115936]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 12:03:37 np0005580781 network[115937]: 'network-scripts' will be removed from distribution in near future.
Jan 10 12:03:37 np0005580781 network[115938]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:03:38
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'images', 'vms']
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:03:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:03:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:03:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:42 np0005580781 python3.9[116200]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:43 np0005580781 python3.9[116278]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:43 np0005580781 python3.9[116430]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:03:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:03:44 np0005580781 python3.9[116582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:45 np0005580781 python3.9[116660]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:46 np0005580781 python3.9[116812]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 10 12:03:46 np0005580781 systemd[1]: Starting Time & Date Service...
Jan 10 12:03:46 np0005580781 systemd[1]: Started Time & Date Service.
Jan 10 12:03:47 np0005580781 python3.9[116968]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:47 np0005580781 python3.9[117120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:48 np0005580781 python3.9[117198]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:49 np0005580781 python3.9[117350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:49 np0005580781 python3.9[117428]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tq8x4k1f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:50 np0005580781 python3.9[117580]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:50 np0005580781 python3.9[117658]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:51 np0005580781 python3.9[117810]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:03:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:52 np0005580781 python3[117963]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 12:03:53 np0005580781 python3.9[118165]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:03:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:03:53 np0005580781 python3.9[118273]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:53 np0005580781 podman[118413]: 2026-01-10 17:03:53.794245063 +0000 UTC m=+0.044641642 container create 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:03:53 np0005580781 systemd[1]: Started libpod-conmon-192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434.scope.
Jan 10 12:03:53 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:03:53 np0005580781 podman[118413]: 2026-01-10 17:03:53.776045369 +0000 UTC m=+0.026441968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:03:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:54 np0005580781 podman[118413]: 2026-01-10 17:03:54.073025047 +0000 UTC m=+0.323421686 container init 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:03:54 np0005580781 podman[118413]: 2026-01-10 17:03:54.080764735 +0000 UTC m=+0.331161314 container start 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 12:03:54 np0005580781 podman[118413]: 2026-01-10 17:03:54.084425639 +0000 UTC m=+0.334822228 container attach 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 12:03:54 np0005580781 pedantic_chatterjee[118452]: 167 167
Jan 10 12:03:54 np0005580781 systemd[1]: libpod-192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434.scope: Deactivated successfully.
Jan 10 12:03:54 np0005580781 podman[118413]: 2026-01-10 17:03:54.092004713 +0000 UTC m=+0.342401292 container died 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:03:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:54 np0005580781 python3.9[118507]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:54 np0005580781 systemd[1]: var-lib-containers-storage-overlay-04ba19d5facd5e93192f34b298c58f901a42a44a584fdb3ab510019d8258e36a-merged.mount: Deactivated successfully.
Jan 10 12:03:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:03:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:03:54 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:03:54 np0005580781 podman[118413]: 2026-01-10 17:03:54.226079319 +0000 UTC m=+0.476475928 container remove 192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:03:54 np0005580781 systemd[1]: libpod-conmon-192fa4025785a5a863637c68d77bfbc7c5a5d607417d180a22eddebd866b9434.scope: Deactivated successfully.
Jan 10 12:03:54 np0005580781 podman[118577]: 2026-01-10 17:03:54.392184871 +0000 UTC m=+0.044857848 container create edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 10 12:03:54 np0005580781 systemd[1]: Started libpod-conmon-edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a.scope.
Jan 10 12:03:54 np0005580781 podman[118577]: 2026-01-10 17:03:54.373684688 +0000 UTC m=+0.026357675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:03:54 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:03:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:54 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:54 np0005580781 podman[118577]: 2026-01-10 17:03:54.509824033 +0000 UTC m=+0.162497000 container init edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:03:54 np0005580781 podman[118577]: 2026-01-10 17:03:54.51677613 +0000 UTC m=+0.169449097 container start edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:03:54 np0005580781 podman[118577]: 2026-01-10 17:03:54.520056322 +0000 UTC m=+0.172729359 container attach edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:03:54 np0005580781 python3.9[118621]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:55 np0005580781 ecstatic_mccarthy[118624]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:03:55 np0005580781 ecstatic_mccarthy[118624]: --> All data devices are unavailable
Jan 10 12:03:55 np0005580781 systemd[1]: libpod-edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a.scope: Deactivated successfully.
Jan 10 12:03:55 np0005580781 podman[118577]: 2026-01-10 17:03:55.12334318 +0000 UTC m=+0.776016197 container died edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:03:55 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ccac82e9f465127507038ba40c47bd62473c5475287a6b5dbb2bd9d3c922353e-merged.mount: Deactivated successfully.
Jan 10 12:03:55 np0005580781 podman[118577]: 2026-01-10 17:03:55.165615924 +0000 UTC m=+0.818288891 container remove edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 12:03:55 np0005580781 systemd[1]: libpod-conmon-edc343e1202b199fdec98c6c1d3c28de97663cf10eb2e95d9f4f458eea84b37a.scope: Deactivated successfully.
Jan 10 12:03:55 np0005580781 python3.9[118796]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:55 np0005580781 podman[118930]: 2026-01-10 17:03:55.59635692 +0000 UTC m=+0.051100705 container create 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 12:03:55 np0005580781 systemd[1]: Started libpod-conmon-4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01.scope.
Jan 10 12:03:55 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:03:55 np0005580781 podman[118930]: 2026-01-10 17:03:55.578548107 +0000 UTC m=+0.033291902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:03:55 np0005580781 podman[118930]: 2026-01-10 17:03:55.679043225 +0000 UTC m=+0.133787090 container init 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:03:55 np0005580781 podman[118930]: 2026-01-10 17:03:55.685643011 +0000 UTC m=+0.140386796 container start 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:03:55 np0005580781 podman[118930]: 2026-01-10 17:03:55.688957415 +0000 UTC m=+0.143701290 container attach 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 12:03:55 np0005580781 keen_elion[118965]: 167 167
Jan 10 12:03:55 np0005580781 systemd[1]: libpod-4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01.scope: Deactivated successfully.
Jan 10 12:03:55 np0005580781 podman[118930]: 2026-01-10 17:03:55.69233295 +0000 UTC m=+0.147076735 container died 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:03:55 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1875434d74bb55cb290a601a65aff9926edcc9d66d3058b5c21e58041b0e0c64-merged.mount: Deactivated successfully.
Jan 10 12:03:55 np0005580781 podman[118930]: 2026-01-10 17:03:55.729949083 +0000 UTC m=+0.184692858 container remove 4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_elion, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:03:55 np0005580781 systemd[1]: libpod-conmon-4ded34f52806b039675600fe2b8c8074723db58ec2398bdba7452f93763e1c01.scope: Deactivated successfully.
Jan 10 12:03:55 np0005580781 python3.9[118962]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:55 np0005580781 podman[118994]: 2026-01-10 17:03:55.895760536 +0000 UTC m=+0.055320744 container create a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:03:55 np0005580781 systemd[1]: Started libpod-conmon-a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82.scope.
Jan 10 12:03:55 np0005580781 podman[118994]: 2026-01-10 17:03:55.873634231 +0000 UTC m=+0.033194469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:03:55 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:03:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:55 np0005580781 podman[118994]: 2026-01-10 17:03:55.984525543 +0000 UTC m=+0.144085801 container init a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 12:03:55 np0005580781 podman[118994]: 2026-01-10 17:03:55.993235979 +0000 UTC m=+0.152796187 container start a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 12:03:55 np0005580781 podman[118994]: 2026-01-10 17:03:55.996859151 +0000 UTC m=+0.156419369 container attach a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:03:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]: {
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:    "0": [
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:        {
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "devices": [
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "/dev/loop3"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            ],
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_name": "ceph_lv0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_size": "21470642176",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "name": "ceph_lv0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "tags": {
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cluster_name": "ceph",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.crush_device_class": "",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.encrypted": "0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.objectstore": "bluestore",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osd_id": "0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.type": "block",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.vdo": "0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.with_tpm": "0"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            },
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "type": "block",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "vg_name": "ceph_vg0"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:        }
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:    ],
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:    "1": [
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:        {
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "devices": [
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "/dev/loop4"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            ],
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_name": "ceph_lv1",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_size": "21470642176",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "name": "ceph_lv1",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "tags": {
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cluster_name": "ceph",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.crush_device_class": "",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.encrypted": "0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.objectstore": "bluestore",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osd_id": "1",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.type": "block",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.vdo": "0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.with_tpm": "0"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            },
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "type": "block",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "vg_name": "ceph_vg1"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:        }
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:    ],
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:    "2": [
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:        {
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "devices": [
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "/dev/loop5"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            ],
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_name": "ceph_lv2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_size": "21470642176",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "name": "ceph_lv2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "tags": {
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.cluster_name": "ceph",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.crush_device_class": "",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.encrypted": "0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.objectstore": "bluestore",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osd_id": "2",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.type": "block",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.vdo": "0",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:                "ceph.with_tpm": "0"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            },
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "type": "block",
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:            "vg_name": "ceph_vg2"
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:        }
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]:    ]
Jan 10 12:03:56 np0005580781 dazzling_moore[119039]: }
Jan 10 12:03:56 np0005580781 podman[118994]: 2026-01-10 17:03:56.332056238 +0000 UTC m=+0.491616446 container died a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:03:56 np0005580781 systemd[1]: libpod-a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82.scope: Deactivated successfully.
Jan 10 12:03:56 np0005580781 systemd[1]: var-lib-containers-storage-overlay-bc736a9cc8fadc85bdc3cea407da97f31d3d97e7c561a77481110863eba93e1c-merged.mount: Deactivated successfully.
Jan 10 12:03:56 np0005580781 podman[118994]: 2026-01-10 17:03:56.384127869 +0000 UTC m=+0.543688087 container remove a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_moore, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:03:56 np0005580781 systemd[1]: libpod-conmon-a46152d554718048e1afa6b4f88f3177274d6cb395624b650fe9b1a67d5abc82.scope: Deactivated successfully.
Jan 10 12:03:56 np0005580781 python3.9[119167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:56 np0005580781 podman[119315]: 2026-01-10 17:03:56.827474671 +0000 UTC m=+0.052782842 container create 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 12:03:56 np0005580781 systemd[1]: Started libpod-conmon-71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db.scope.
Jan 10 12:03:56 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:03:56 np0005580781 podman[119315]: 2026-01-10 17:03:56.883261876 +0000 UTC m=+0.108570057 container init 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:03:56 np0005580781 podman[119315]: 2026-01-10 17:03:56.889977666 +0000 UTC m=+0.115285817 container start 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 12:03:56 np0005580781 podman[119315]: 2026-01-10 17:03:56.800444817 +0000 UTC m=+0.025753028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:03:56 np0005580781 podman[119315]: 2026-01-10 17:03:56.893282259 +0000 UTC m=+0.118590440 container attach 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:03:56 np0005580781 loving_gates[119333]: 167 167
Jan 10 12:03:56 np0005580781 systemd[1]: libpod-71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db.scope: Deactivated successfully.
Jan 10 12:03:56 np0005580781 podman[119315]: 2026-01-10 17:03:56.895622995 +0000 UTC m=+0.120931146 container died 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:03:56 np0005580781 systemd[1]: var-lib-containers-storage-overlay-641f6c0eaecf820e63e874560af30da702074cfc860acf6997689525a6b99435-merged.mount: Deactivated successfully.
Jan 10 12:03:56 np0005580781 podman[119315]: 2026-01-10 17:03:56.934128663 +0000 UTC m=+0.159436814 container remove 71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 12:03:56 np0005580781 systemd[1]: libpod-conmon-71aad2d36c6f9ff22791b75c26b68fd46d3ec17a662b4f85217012c8bd9f11db.scope: Deactivated successfully.
Jan 10 12:03:56 np0005580781 python3.9[119318]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:57 np0005580781 podman[119381]: 2026-01-10 17:03:57.112293385 +0000 UTC m=+0.044733355 container create a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 12:03:57 np0005580781 systemd[1]: Started libpod-conmon-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope.
Jan 10 12:03:57 np0005580781 podman[119381]: 2026-01-10 17:03:57.091676813 +0000 UTC m=+0.024116783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:03:57 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:03:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:57 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:03:57 np0005580781 podman[119381]: 2026-01-10 17:03:57.209422158 +0000 UTC m=+0.141862128 container init a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:03:57 np0005580781 podman[119381]: 2026-01-10 17:03:57.216130568 +0000 UTC m=+0.148570518 container start a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 10 12:03:57 np0005580781 podman[119381]: 2026-01-10 17:03:57.220341497 +0000 UTC m=+0.152781457 container attach a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 12:03:57 np0005580781 python3.9[119547]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:03:57 np0005580781 lvm[119647]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:03:57 np0005580781 lvm[119647]: VG ceph_vg0 finished
Jan 10 12:03:57 np0005580781 lvm[119649]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:03:57 np0005580781 lvm[119649]: VG ceph_vg1 finished
Jan 10 12:03:57 np0005580781 lvm[119658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:03:57 np0005580781 lvm[119658]: VG ceph_vg2 finished
Jan 10 12:03:58 np0005580781 sweet_shtern[119421]: {}
Jan 10 12:03:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:03:58 np0005580781 systemd[1]: libpod-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope: Deactivated successfully.
Jan 10 12:03:58 np0005580781 podman[119381]: 2026-01-10 17:03:58.107095482 +0000 UTC m=+1.039535472 container died a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:03:58 np0005580781 systemd[1]: libpod-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope: Consumed 1.410s CPU time.
Jan 10 12:03:58 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2e84e7d4cddca5c5e7dfbca7d3d72b6ed1adb955ee45280d70ce0fe84c76b279-merged.mount: Deactivated successfully.
Jan 10 12:03:58 np0005580781 podman[119381]: 2026-01-10 17:03:58.16615729 +0000 UTC m=+1.098597230 container remove a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 12:03:58 np0005580781 systemd[1]: libpod-conmon-a4ab2e6b25296b667f5b725c9d70580b193a5ef803488b1085ad0667572454c6.scope: Deactivated successfully.
Jan 10 12:03:58 np0005580781 python3.9[119687]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:03:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:03:58 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:03:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:03:58 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:03:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:03:59 np0005580781 python3.9[119877]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:03:59 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:03:59 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:03:59 np0005580781 python3.9[120032]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:00 np0005580781 python3.9[120184]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:01 np0005580781 python3.9[120336]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:02 np0005580781 python3.9[120488]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 12:04:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:02 np0005580781 python3.9[120640]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 10 12:04:03 np0005580781 systemd[1]: session-40.scope: Deactivated successfully.
Jan 10 12:04:03 np0005580781 systemd[1]: session-40.scope: Consumed 30.739s CPU time.
Jan 10 12:04:03 np0005580781 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Jan 10 12:04:03 np0005580781 systemd-logind[798]: Removed session 40.
Jan 10 12:04:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:08 np0005580781 systemd-logind[798]: New session 41 of user zuul.
Jan 10 12:04:08 np0005580781 systemd[1]: Started Session 41 of User zuul.
Jan 10 12:04:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:04:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:04:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:04:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:04:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:04:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:04:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:09 np0005580781 python3.9[120822]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 10 12:04:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:10 np0005580781 python3.9[120974]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:04:11 np0005580781 python3.9[121128]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 10 12:04:11 np0005580781 python3.9[121280]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.prj1z4_9 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:04:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:12 np0005580781 python3.9[121405]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.prj1z4_9 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064651.2719548-44-146808587901152/.source.prj1z4_9 _original_basename=.ucz4gksr follow=False checksum=c16efaf3fcf3c55e6b76526c00ad8db14a29321c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:13 np0005580781 python3.9[121557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:04:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:14 np0005580781 python3.9[121709]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLbf1u7QZKIo5G+YWiNhcXI+Bt6YV4GfE/ux3dizYMgWBt9o+PmlYYMiVREbRw0Bbw1ytXXbF5+nj3Xb2CXI8ussGl0WspjKSeiZ6iZLcZTiCJLgJ/9hsvwXR//dQk9MHjPU21/f9Bmm5bXO7JD6wyeZ6BhNNSRil+tMQ9dtlaRlLoSzr5CXtKSgvp0EnFO/wO0yIjn5vj0Kg53pKe6PklqqbDKQe4B3RTSjCo711H66GqFuA0OZDkpKEVqdQFy9HUPAxgflwamxh1bRZYQ4oZ+sRK0y7Aau5nyIxefmh+nrgkwpuGnfu/PBcFHlgDpGdK5SR2MN7oUwfJtJl+qp1MFaUz+TRF7THXK8e6MCD0RPGfqlim6D6qGfKkbBYM50kTncYakPtGOrLbf/hARiTSEduglbNBYv0vatpv1emwjOPwkAu3DZdOi4PokhOq+BnOnG95UH3ZzOWO+UnNEiCQgCu7NbzJOFb/KoBU8XRT1o8yPWdpwQ+mKGFE1PGsA7k=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVw/TzKh+QQYsI9HFUl2xKC/Iozkh6C2Rlm1r7qShYC#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHuUq5M0wkVhsnk90cNjQOZixGqQR1X/PXyTQuPIQfBmEkOk4KlPkJk1al+bzULcCOXjdbnilDQbL6yRpQlhrU=#012 create=True mode=0644 path=/tmp/ansible.prj1z4_9 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:15 np0005580781 python3.9[121861]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.prj1z4_9' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:04:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:16 np0005580781 python3.9[122015]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.prj1z4_9 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:16 np0005580781 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 10 12:04:16 np0005580781 systemd[1]: session-41.scope: Deactivated successfully.
Jan 10 12:04:16 np0005580781 systemd[1]: session-41.scope: Consumed 5.463s CPU time.
Jan 10 12:04:16 np0005580781 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Jan 10 12:04:16 np0005580781 systemd-logind[798]: Removed session 41.
Jan 10 12:04:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:21 np0005580781 systemd-logind[798]: New session 42 of user zuul.
Jan 10 12:04:21 np0005580781 systemd[1]: Started Session 42 of User zuul.
Jan 10 12:04:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:22 np0005580781 python3.9[122195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:04:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:24 np0005580781 python3.9[122351]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 10 12:04:25 np0005580781 python3.9[122505]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:04:25 np0005580781 python3.9[122658]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:04:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:26 np0005580781 python3.9[122811]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:04:27 np0005580781 python3.9[122963]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:27 np0005580781 systemd[1]: session-42.scope: Deactivated successfully.
Jan 10 12:04:27 np0005580781 systemd[1]: session-42.scope: Consumed 4.146s CPU time.
Jan 10 12:04:27 np0005580781 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Jan 10 12:04:27 np0005580781 systemd-logind[798]: Removed session 42.
Jan 10 12:04:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:31 np0005580781 systemd[1]: session-18.scope: Deactivated successfully.
Jan 10 12:04:31 np0005580781 systemd[1]: session-18.scope: Consumed 1min 55.737s CPU time.
Jan 10 12:04:31 np0005580781 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Jan 10 12:04:31 np0005580781 systemd-logind[798]: Removed session 18.
Jan 10 12:04:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:32 np0005580781 systemd-logind[798]: New session 43 of user zuul.
Jan 10 12:04:32 np0005580781 systemd[1]: Started Session 43 of User zuul.
Jan 10 12:04:33 np0005580781 python3.9[123141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:04:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:35 np0005580781 python3.9[123297]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:04:35 np0005580781 python3.9[123381]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 10 12:04:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:38 np0005580781 python3.9[123532]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:04:38
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'images']
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:04:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:04:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:04:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:39 np0005580781 python3.9[123683]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 12:04:40 np0005580781 python3.9[123833]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:04:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:40 np0005580781 python3.9[123983]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:04:41 np0005580781 systemd[1]: session-43.scope: Deactivated successfully.
Jan 10 12:04:41 np0005580781 systemd[1]: session-43.scope: Consumed 6.145s CPU time.
Jan 10 12:04:41 np0005580781 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Jan 10 12:04:41 np0005580781 systemd-logind[798]: Removed session 43.
Jan 10 12:04:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:04:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:04:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:46 np0005580781 systemd-logind[798]: New session 44 of user zuul.
Jan 10 12:04:46 np0005580781 systemd[1]: Started Session 44 of User zuul.
Jan 10 12:04:47 np0005580781 python3.9[124162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:04:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:49 np0005580781 python3.9[124318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:04:49 np0005580781 python3.9[124470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:04:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:50 np0005580781 python3.9[124622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:04:51 np0005580781 python3.9[124745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064690.1058009-60-195841109043771/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e485d678466a488a60f8e482454471a355c36f72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:52 np0005580781 python3.9[124897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:04:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:52 np0005580781 python3.9[125020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064691.6447883-60-248199502016995/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ca038f02567930da0b541567198b9dabc46ea4df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:53 np0005580781 python3.9[125172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:04:53 np0005580781 python3.9[125295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064692.8077638-60-127709154005490/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=41ab701d91a5d2c0623e2f0f9a873502cb129bb7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:54 np0005580781 python3.9[125447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:04:55 np0005580781 python3.9[125599]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:04:55 np0005580781 python3.9[125751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:04:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:56 np0005580781 python3.9[125874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064695.4554915-119-262832685100687/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b6d15fd162bf0e10fa7a56e0e8f7a485557793ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:57 np0005580781 python3.9[126026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:04:57 np0005580781 python3.9[126149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064696.6584558-119-2037805664209/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3b8b82f07f1ef991370ee1a21f059d8a61d3668d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:04:58 np0005580781 python3.9[126301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:04:59 np0005580781 python3.9[126488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064697.932487-119-244265231215869/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7af582baab9c3f815fac6ee51c17b6b6c5772501 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:04:59 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:04:59 np0005580781 podman[126670]: 2026-01-10 17:04:59.500609251 +0000 UTC m=+0.046702024 container create 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:04:59 np0005580781 systemd[1]: Started libpod-conmon-0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446.scope.
Jan 10 12:04:59 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:04:59 np0005580781 podman[126670]: 2026-01-10 17:04:59.481441238 +0000 UTC m=+0.027534051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:04:59 np0005580781 podman[126670]: 2026-01-10 17:04:59.57785682 +0000 UTC m=+0.123949613 container init 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:04:59 np0005580781 podman[126670]: 2026-01-10 17:04:59.584596181 +0000 UTC m=+0.130688954 container start 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 12:04:59 np0005580781 podman[126670]: 2026-01-10 17:04:59.587488193 +0000 UTC m=+0.133581006 container attach 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 12:04:59 np0005580781 hardcore_satoshi[126734]: 167 167
Jan 10 12:04:59 np0005580781 systemd[1]: libpod-0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446.scope: Deactivated successfully.
Jan 10 12:04:59 np0005580781 podman[126670]: 2026-01-10 17:04:59.591007183 +0000 UTC m=+0.137099966 container died 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:04:59 np0005580781 systemd[1]: var-lib-containers-storage-overlay-63704ade634d76d74647a926a86b2af5ae27d36694ab174f0a404e3fabcc32d6-merged.mount: Deactivated successfully.
Jan 10 12:04:59 np0005580781 podman[126670]: 2026-01-10 17:04:59.627338452 +0000 UTC m=+0.173431225 container remove 0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 12:04:59 np0005580781 systemd[1]: libpod-conmon-0e62641bad7ae79a6655938670a84e504bb5c24421a414b0b47056e42da66446.scope: Deactivated successfully.
Jan 10 12:04:59 np0005580781 python3.9[126738]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:04:59 np0005580781 podman[126761]: 2026-01-10 17:04:59.803503324 +0000 UTC m=+0.045274014 container create 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:04:59 np0005580781 systemd[1]: Started libpod-conmon-2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62.scope.
Jan 10 12:04:59 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:04:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:04:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:04:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:04:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:04:59 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:04:59 np0005580781 podman[126761]: 2026-01-10 17:04:59.780999197 +0000 UTC m=+0.022769887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:04:59 np0005580781 podman[126761]: 2026-01-10 17:04:59.884295674 +0000 UTC m=+0.126066354 container init 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:04:59 np0005580781 podman[126761]: 2026-01-10 17:04:59.892757143 +0000 UTC m=+0.134527843 container start 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:04:59 np0005580781 podman[126761]: 2026-01-10 17:04:59.897619811 +0000 UTC m=+0.139390581 container attach 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:05:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:00 np0005580781 python3.9[126938]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:00 np0005580781 interesting_goodall[126799]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:05:00 np0005580781 interesting_goodall[126799]: --> All data devices are unavailable
Jan 10 12:05:00 np0005580781 systemd[1]: libpod-2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62.scope: Deactivated successfully.
Jan 10 12:05:00 np0005580781 podman[126761]: 2026-01-10 17:05:00.490572573 +0000 UTC m=+0.732343263 container died 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:05:00 np0005580781 systemd[1]: var-lib-containers-storage-overlay-98f1676975d35dde4cf215a150bf41ee1120f42cd39ecfaf242ac2985c7b9732-merged.mount: Deactivated successfully.
Jan 10 12:05:00 np0005580781 podman[126761]: 2026-01-10 17:05:00.541459455 +0000 UTC m=+0.783230125 container remove 2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goodall, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 12:05:00 np0005580781 systemd[1]: libpod-conmon-2d29d0d6fdae40e18e57b49b2551c89c5e1c81addb82d4b79bd68c7a10b79d62.scope: Deactivated successfully.
Jan 10 12:05:01 np0005580781 python3.9[127164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:01 np0005580781 podman[127176]: 2026-01-10 17:05:01.037000847 +0000 UTC m=+0.063741407 container create c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:05:01 np0005580781 systemd[1]: Started libpod-conmon-c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8.scope.
Jan 10 12:05:01 np0005580781 podman[127176]: 2026-01-10 17:05:01.009446346 +0000 UTC m=+0.036186996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:05:01 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:05:01 np0005580781 podman[127176]: 2026-01-10 17:05:01.136246299 +0000 UTC m=+0.162986959 container init c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:05:01 np0005580781 podman[127176]: 2026-01-10 17:05:01.144017199 +0000 UTC m=+0.170757769 container start c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:05:01 np0005580781 podman[127176]: 2026-01-10 17:05:01.148125336 +0000 UTC m=+0.174865976 container attach c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:05:01 np0005580781 agitated_shamir[127193]: 167 167
Jan 10 12:05:01 np0005580781 systemd[1]: libpod-c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8.scope: Deactivated successfully.
Jan 10 12:05:01 np0005580781 podman[127176]: 2026-01-10 17:05:01.150987457 +0000 UTC m=+0.177728007 container died c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 12:05:01 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3e06dd0c099695b8c2848401ed8c9c74708ac80e34bcafda5b02492e5ec2e7bb-merged.mount: Deactivated successfully.
Jan 10 12:05:01 np0005580781 podman[127176]: 2026-01-10 17:05:01.189145038 +0000 UTC m=+0.215885578 container remove c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 12:05:01 np0005580781 systemd[1]: libpod-conmon-c8ddbd3e2610bd937c8c7554ec562a815742fc2e4c0abc4f71e52e3359cbfaf8.scope: Deactivated successfully.
Jan 10 12:05:01 np0005580781 podman[127264]: 2026-01-10 17:05:01.33672554 +0000 UTC m=+0.040595031 container create a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:05:01 np0005580781 systemd[1]: Started libpod-conmon-a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1.scope.
Jan 10 12:05:01 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:05:01 np0005580781 podman[127264]: 2026-01-10 17:05:01.31801173 +0000 UTC m=+0.021881271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:05:01 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:01 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:01 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:01 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:01 np0005580781 podman[127264]: 2026-01-10 17:05:01.436974401 +0000 UTC m=+0.140843922 container init a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:05:01 np0005580781 podman[127264]: 2026-01-10 17:05:01.444782322 +0000 UTC m=+0.148651813 container start a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:05:01 np0005580781 podman[127264]: 2026-01-10 17:05:01.447934371 +0000 UTC m=+0.151803872 container attach a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:05:01 np0005580781 python3.9[127361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064700.563052-178-107697408756/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=96f3925768934c7395b739536fa8f7b4d1baf946 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:01 np0005580781 competent_jemison[127320]: {
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:    "0": [
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:        {
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "devices": [
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "/dev/loop3"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            ],
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_name": "ceph_lv0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_size": "21470642176",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "name": "ceph_lv0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "tags": {
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cluster_name": "ceph",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.crush_device_class": "",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.encrypted": "0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.objectstore": "bluestore",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osd_id": "0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.type": "block",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.vdo": "0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.with_tpm": "0"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            },
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "type": "block",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "vg_name": "ceph_vg0"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:        }
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:    ],
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:    "1": [
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:        {
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "devices": [
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "/dev/loop4"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            ],
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_name": "ceph_lv1",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_size": "21470642176",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "name": "ceph_lv1",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "tags": {
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cluster_name": "ceph",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.crush_device_class": "",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.encrypted": "0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.objectstore": "bluestore",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osd_id": "1",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.type": "block",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.vdo": "0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.with_tpm": "0"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            },
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "type": "block",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "vg_name": "ceph_vg1"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:        }
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:    ],
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:    "2": [
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:        {
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "devices": [
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "/dev/loop5"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            ],
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_name": "ceph_lv2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_size": "21470642176",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "name": "ceph_lv2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "tags": {
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.cluster_name": "ceph",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.crush_device_class": "",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.encrypted": "0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.objectstore": "bluestore",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osd_id": "2",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.type": "block",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.vdo": "0",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:                "ceph.with_tpm": "0"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            },
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "type": "block",
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:            "vg_name": "ceph_vg2"
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:        }
Jan 10 12:05:01 np0005580781 competent_jemison[127320]:    ]
Jan 10 12:05:01 np0005580781 competent_jemison[127320]: }
Jan 10 12:05:01 np0005580781 systemd[1]: libpod-a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1.scope: Deactivated successfully.
Jan 10 12:05:01 np0005580781 podman[127264]: 2026-01-10 17:05:01.774766943 +0000 UTC m=+0.478636444 container died a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 12:05:01 np0005580781 systemd[1]: var-lib-containers-storage-overlay-331ebb8ba491a49325cce48d983ee6b56f2d97997da2eb68a9de9f6a97007762-merged.mount: Deactivated successfully.
Jan 10 12:05:01 np0005580781 podman[127264]: 2026-01-10 17:05:01.820871059 +0000 UTC m=+0.524740560 container remove a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 12:05:01 np0005580781 systemd[1]: libpod-conmon-a1e5ff5dcef0b08fbf5c0e9e27764fa5821dbe537b2a552104e32039d3b18db1.scope: Deactivated successfully.
Jan 10 12:05:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:02 np0005580781 python3.9[127579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:02 np0005580781 podman[127593]: 2026-01-10 17:05:02.315272229 +0000 UTC m=+0.081442399 container create df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:05:02 np0005580781 systemd[1]: Started libpod-conmon-df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c.scope.
Jan 10 12:05:02 np0005580781 podman[127593]: 2026-01-10 17:05:02.280210075 +0000 UTC m=+0.046380325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:05:02 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:05:02 np0005580781 podman[127593]: 2026-01-10 17:05:02.402897922 +0000 UTC m=+0.169068082 container init df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:05:02 np0005580781 podman[127593]: 2026-01-10 17:05:02.409460228 +0000 UTC m=+0.175630388 container start df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 12:05:02 np0005580781 podman[127593]: 2026-01-10 17:05:02.412956877 +0000 UTC m=+0.179127037 container attach df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:05:02 np0005580781 competent_cray[127616]: 167 167
Jan 10 12:05:02 np0005580781 systemd[1]: libpod-df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c.scope: Deactivated successfully.
Jan 10 12:05:02 np0005580781 podman[127593]: 2026-01-10 17:05:02.415006895 +0000 UTC m=+0.181177045 container died df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 12:05:02 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1bd30b322662a431d83068439c6aaf28b83e591affb3c6e4d4394564f0f0a93b-merged.mount: Deactivated successfully.
Jan 10 12:05:02 np0005580781 podman[127593]: 2026-01-10 17:05:02.450639884 +0000 UTC m=+0.216810044 container remove df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cray, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 12:05:02 np0005580781 systemd[1]: libpod-conmon-df57d0067d6342c971e8789c4ff80d52a7ccb4a40b0149254f26591c765e928c.scope: Deactivated successfully.
Jan 10 12:05:02 np0005580781 podman[127708]: 2026-01-10 17:05:02.623332788 +0000 UTC m=+0.062026729 container create 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:05:02 np0005580781 systemd[1]: Started libpod-conmon-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope.
Jan 10 12:05:02 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:05:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:02 np0005580781 podman[127708]: 2026-01-10 17:05:02.602907379 +0000 UTC m=+0.041601370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:05:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:05:02 np0005580781 podman[127708]: 2026-01-10 17:05:02.709800468 +0000 UTC m=+0.148494459 container init 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:05:02 np0005580781 podman[127708]: 2026-01-10 17:05:02.723236609 +0000 UTC m=+0.161930600 container start 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:05:02 np0005580781 podman[127708]: 2026-01-10 17:05:02.72753293 +0000 UTC m=+0.166226891 container attach 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 12:05:02 np0005580781 python3.9[127776]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064701.8219924-178-257598700437892/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3b8b82f07f1ef991370ee1a21f059d8a61d3668d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:03 np0005580781 python3.9[127981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:03 np0005580781 lvm[128005]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:05:03 np0005580781 lvm[128005]: VG ceph_vg1 finished
Jan 10 12:05:03 np0005580781 lvm[128002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:05:03 np0005580781 lvm[128002]: VG ceph_vg0 finished
Jan 10 12:05:03 np0005580781 lvm[128007]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:05:03 np0005580781 lvm[128007]: VG ceph_vg2 finished
Jan 10 12:05:03 np0005580781 ecstatic_cartwright[127772]: {}
Jan 10 12:05:03 np0005580781 systemd[1]: libpod-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope: Deactivated successfully.
Jan 10 12:05:03 np0005580781 systemd[1]: libpod-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope: Consumed 1.347s CPU time.
Jan 10 12:05:03 np0005580781 podman[128057]: 2026-01-10 17:05:03.629329923 +0000 UTC m=+0.025560965 container died 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 12:05:03 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0a515995dd6c066f3c03a6897f7d13d9a8c7ae3ba67e5527d1b298aff9608f25-merged.mount: Deactivated successfully.
Jan 10 12:05:03 np0005580781 podman[128057]: 2026-01-10 17:05:03.671311313 +0000 UTC m=+0.067542335 container remove 79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 12:05:03 np0005580781 systemd[1]: libpod-conmon-79a1f67cbe7a3604e9f1f02a1ce39db41c1340de9a107f382d94cc02c9d3eb46.scope: Deactivated successfully.
Jan 10 12:05:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:05:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:05:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:05:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:05:04 np0005580781 python3.9[128171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064703.0010564-178-242534940115453/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c56b196327ab38c96598f9582974c28b6e44c1a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:05:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:05:05 np0005580781 python3.9[128324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:06 np0005580781 python3.9[128476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:06 np0005580781 python3.9[128599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064705.5200217-246-103613909924887/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:07 np0005580781 python3.9[128751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:08 np0005580781 python3.9[128903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:08 np0005580781 python3.9[129026]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064707.7797709-270-99606809918452/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:05:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:05:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:05:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:05:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:05:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:05:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:09 np0005580781 python3.9[129178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:10 np0005580781 python3.9[129330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:10 np0005580781 python3.9[129453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064709.690493-294-255146557262320/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:11 np0005580781 python3.9[129605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:12 np0005580781 python3.9[129757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:12 np0005580781 python3.9[129880]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064711.6718209-318-149066320588620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:13 np0005580781 python3.9[130032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:13 np0005580781 python3.9[130184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:14 np0005580781 python3.9[130307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064713.4758482-342-182869410645144/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:15 np0005580781 python3.9[130459]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:15 np0005580781 python3.9[130611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:16 np0005580781 python3.9[130734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064715.3562958-366-253020009294174/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1c1aa104eb1736f59ba6477b43a84ef8e828e0b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:16 np0005580781 systemd[1]: session-44.scope: Deactivated successfully.
Jan 10 12:05:16 np0005580781 systemd[1]: session-44.scope: Consumed 23.937s CPU time.
Jan 10 12:05:16 np0005580781 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Jan 10 12:05:16 np0005580781 systemd-logind[798]: Removed session 44.
Jan 10 12:05:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.275194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718275464, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6689, "num_deletes": 251, "total_data_size": 7852383, "memory_usage": 8005440, "flush_reason": "Manual Compaction"}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718322565, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5822486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 6832, "table_properties": {"data_size": 5798706, "index_size": 15474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7237, "raw_key_size": 63657, "raw_average_key_size": 22, "raw_value_size": 5743790, "raw_average_value_size": 2002, "num_data_blocks": 693, "num_entries": 2869, "num_filter_entries": 2869, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064238, "oldest_key_time": 1768064238, "file_creation_time": 1768064718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 47385 microseconds, and 18567 cpu microseconds.
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.322648) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5822486 bytes OK
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.322728) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.324977) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.325011) EVENT_LOG_v1 {"time_micros": 1768064718325005, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.325070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 7824041, prev total WAL file size 7824041, number of live WAL files 2.
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.327318) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5686KB) 13(58KB) 8(1944B)]
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718327647, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5884390, "oldest_snapshot_seqno": -1}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2695 keys, 5837320 bytes, temperature: kUnknown
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718385050, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5837320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5813885, "index_size": 15582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 62097, "raw_average_key_size": 23, "raw_value_size": 5760310, "raw_average_value_size": 2137, "num_data_blocks": 698, "num_entries": 2695, "num_filter_entries": 2695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768064718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.385565) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5837320 bytes
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.387059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.4 rd, 101.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.6, 0.0 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2984, records dropped: 289 output_compression: NoCompression
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.387081) EVENT_LOG_v1 {"time_micros": 1768064718387068, "job": 4, "event": "compaction_finished", "compaction_time_micros": 57461, "compaction_time_cpu_micros": 30019, "output_level": 6, "num_output_files": 1, "total_output_size": 5837320, "num_input_records": 2984, "num_output_records": 2695, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718388783, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718388874, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064718388958, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 10 12:05:18 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:05:18.326903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:05:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:22 np0005580781 systemd-logind[798]: New session 45 of user zuul.
Jan 10 12:05:22 np0005580781 systemd[1]: Started Session 45 of User zuul.
Jan 10 12:05:23 np0005580781 python3.9[130915]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:24 np0005580781 python3.9[131067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:24 np0005580781 python3.9[131190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064723.5338562-29-24083593284221/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=7cc641ddc3c198361b04b7e13e353930d285d63f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:25 np0005580781 python3.9[131342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:26 np0005580781 python3.9[131465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064725.1008286-29-266859983038931/.source.conf _original_basename=ceph.conf follow=False checksum=212a91a5c6ea3008fced76612c32c83bbed76d72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:26 np0005580781 systemd[1]: session-45.scope: Deactivated successfully.
Jan 10 12:05:26 np0005580781 systemd[1]: session-45.scope: Consumed 2.882s CPU time.
Jan 10 12:05:26 np0005580781 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Jan 10 12:05:26 np0005580781 systemd-logind[798]: Removed session 45.
Jan 10 12:05:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:31 np0005580781 systemd-logind[798]: New session 46 of user zuul.
Jan 10 12:05:31 np0005580781 systemd[1]: Started Session 46 of User zuul.
Jan 10 12:05:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:33 np0005580781 python3.9[131643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:05:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:34 np0005580781 python3.9[131799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:35 np0005580781 python3.9[131951]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:05:35 np0005580781 python3.9[132101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:05:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:36 np0005580781 python3.9[132253]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:05:38
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'images']
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:38 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:05:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:05:39 np0005580781 python3.9[132409]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:05:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:05:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:39 np0005580781 python3.9[132493]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:05:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:42 np0005580781 python3.9[132646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:05:43 np0005580781 python3[132801]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 10 12:05:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:44 np0005580781 python3.9[132953]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:05:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:05:45 np0005580781 python3.9[133105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:45 np0005580781 python3.9[133183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:46 np0005580781 python3.9[133335]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:46 np0005580781 python3.9[133413]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5492imh7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:47 np0005580781 python3.9[133565]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:47 np0005580781 python3.9[133643]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:48 np0005580781 python3.9[133795]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:05:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:49 np0005580781 python3[133950]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 12:05:50 np0005580781 python3.9[134102]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:50 np0005580781 python3.9[134227]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064749.5778897-152-125728905625632/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:51 np0005580781 python3.9[134379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:52 np0005580781 python3.9[134504]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064751.04687-167-144701447996540/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:52 np0005580781 python3.9[134656]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:53 np0005580781 python3.9[134781]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064752.4682178-182-233430569845012/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:54 np0005580781 python3.9[134933]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:54 np0005580781 python3.9[135058]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064753.670583-197-60708047697570/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:55 np0005580781 python3.9[135210]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:05:56 np0005580781 python3.9[135335]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768064754.9803271-212-31590854392819/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:56 np0005580781 python3.9[135487]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:57 np0005580781 python3.9[135639]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:05:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:05:58 np0005580781 python3.9[135794]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:05:58 np0005580781 python3.9[135946]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:05:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:05:59 np0005580781 python3.9[136099]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:06:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:00 np0005580781 python3.9[136253]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:06:01 np0005580781 python3.9[136408]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:02 np0005580781 python3.9[136558]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:06:03 np0005580781 python3.9[136711]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:9d:bd:06:c0" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:06:03 np0005580781 ovs-vsctl[136712]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:9d:bd:06:c0 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 10 12:06:04 np0005580781 python3.9[136864]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:06:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:06:04 np0005580781 python3.9[137097]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:06:04 np0005580781 ovs-vsctl[137140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 10 12:06:04 np0005580781 podman[137242]: 2026-01-10 17:06:04.959325158 +0000 UTC m=+0.034802595 container create fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:05 np0005580781 systemd[1]: Started libpod-conmon-fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd.scope.
Jan 10 12:06:05 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:06:05 np0005580781 podman[137242]: 2026-01-10 17:06:04.944553663 +0000 UTC m=+0.020031120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:06:05 np0005580781 podman[137242]: 2026-01-10 17:06:05.051059403 +0000 UTC m=+0.126536860 container init fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:06:05 np0005580781 podman[137242]: 2026-01-10 17:06:05.057387087 +0000 UTC m=+0.132864524 container start fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:06:05 np0005580781 podman[137242]: 2026-01-10 17:06:05.060237845 +0000 UTC m=+0.135715362 container attach fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 12:06:05 np0005580781 focused_murdock[137289]: 167 167
Jan 10 12:06:05 np0005580781 systemd[1]: libpod-fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd.scope: Deactivated successfully.
Jan 10 12:06:05 np0005580781 podman[137242]: 2026-01-10 17:06:05.063968627 +0000 UTC m=+0.139446084 container died fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:06:05 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4e4b7d23fb7ec8f404bc43782a1727de2042d10d9b073e6d6835b9b5937243a1-merged.mount: Deactivated successfully.
Jan 10 12:06:05 np0005580781 podman[137242]: 2026-01-10 17:06:05.102632317 +0000 UTC m=+0.178109754 container remove fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_murdock, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:05 np0005580781 systemd[1]: libpod-conmon-fba20398cdfacfe3b4a3588d841dbf5866387864e1265dbf64ec82f3bf545fbd.scope: Deactivated successfully.
Jan 10 12:06:05 np0005580781 podman[137355]: 2026-01-10 17:06:05.260057593 +0000 UTC m=+0.045099087 container create 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:05 np0005580781 python3.9[137344]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:06:05 np0005580781 systemd[1]: Started libpod-conmon-0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f.scope.
Jan 10 12:06:05 np0005580781 podman[137355]: 2026-01-10 17:06:05.239912731 +0000 UTC m=+0.024954245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:06:05 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:06:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:05 np0005580781 podman[137355]: 2026-01-10 17:06:05.372056144 +0000 UTC m=+0.157097638 container init 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:05 np0005580781 podman[137355]: 2026-01-10 17:06:05.380768013 +0000 UTC m=+0.165809507 container start 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:05 np0005580781 podman[137355]: 2026-01-10 17:06:05.384062163 +0000 UTC m=+0.169103657 container attach 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:06:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:06:05 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:06:05 np0005580781 keen_rhodes[137374]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:06:05 np0005580781 keen_rhodes[137374]: --> All data devices are unavailable
Jan 10 12:06:05 np0005580781 systemd[1]: libpod-0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f.scope: Deactivated successfully.
Jan 10 12:06:05 np0005580781 podman[137355]: 2026-01-10 17:06:05.958313657 +0000 UTC m=+0.743355161 container died 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:05 np0005580781 python3.9[137540]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:05 np0005580781 systemd[1]: var-lib-containers-storage-overlay-68cd0829cf029a9df6abd0bda56f746a8e92c21c700fe5d7aa69c77bc2a47835-merged.mount: Deactivated successfully.
Jan 10 12:06:06 np0005580781 podman[137355]: 2026-01-10 17:06:06.009790748 +0000 UTC m=+0.794832242 container remove 0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:06:06 np0005580781 systemd[1]: libpod-conmon-0c97a0044c7abb5a6a1e76536c30b6ab710cbbd1a908806c06e0fbdfc981f41f.scope: Deactivated successfully.
Jan 10 12:06:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:06 np0005580781 podman[137752]: 2026-01-10 17:06:06.597897092 +0000 UTC m=+0.070564006 container create c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:06:06 np0005580781 systemd[1]: Started libpod-conmon-c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec.scope.
Jan 10 12:06:06 np0005580781 podman[137752]: 2026-01-10 17:06:06.568779763 +0000 UTC m=+0.041446717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:06:06 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:06:06 np0005580781 podman[137752]: 2026-01-10 17:06:06.702982572 +0000 UTC m=+0.175649456 container init c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 12:06:06 np0005580781 podman[137752]: 2026-01-10 17:06:06.715616019 +0000 UTC m=+0.188282923 container start c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:06:06 np0005580781 podman[137752]: 2026-01-10 17:06:06.720195944 +0000 UTC m=+0.192862848 container attach c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 12:06:06 np0005580781 zealous_mendeleev[137788]: 167 167
Jan 10 12:06:06 np0005580781 systemd[1]: libpod-c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec.scope: Deactivated successfully.
Jan 10 12:06:06 np0005580781 podman[137752]: 2026-01-10 17:06:06.723346571 +0000 UTC m=+0.196013435 container died c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:06:06 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a4e11053760cdabbe73411ba514e0219a675a2e81a91329aa0cec200da5ede09-merged.mount: Deactivated successfully.
Jan 10 12:06:06 np0005580781 podman[137752]: 2026-01-10 17:06:06.776733534 +0000 UTC m=+0.249400408 container remove c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_mendeleev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 10 12:06:06 np0005580781 systemd[1]: libpod-conmon-c9db3bf65a533a15cccb459a5e33a575522280795452f2dad1b327011d7fe8ec.scope: Deactivated successfully.
Jan 10 12:06:06 np0005580781 python3.9[137785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:06 np0005580781 podman[137835]: 2026-01-10 17:06:06.961811218 +0000 UTC m=+0.053284202 container create 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 10 12:06:06 np0005580781 systemd[1]: Started libpod-conmon-7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6.scope.
Jan 10 12:06:07 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:06:07 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:07 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:07 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:07 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:07 np0005580781 podman[137835]: 2026-01-10 17:06:06.941138182 +0000 UTC m=+0.032611206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:06:07 np0005580781 podman[137835]: 2026-01-10 17:06:07.037867743 +0000 UTC m=+0.129340737 container init 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 10 12:06:07 np0005580781 podman[137835]: 2026-01-10 17:06:07.046290874 +0000 UTC m=+0.137763878 container start 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:06:07 np0005580781 podman[137835]: 2026-01-10 17:06:07.049967105 +0000 UTC m=+0.141440089 container attach 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:07 np0005580781 python3.9[137911]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]: {
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:    "0": [
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:        {
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "devices": [
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "/dev/loop3"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            ],
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_name": "ceph_lv0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_size": "21470642176",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "name": "ceph_lv0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "tags": {
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cluster_name": "ceph",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.crush_device_class": "",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.encrypted": "0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.objectstore": "bluestore",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osd_id": "0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.type": "block",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.vdo": "0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.with_tpm": "0"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            },
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "type": "block",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "vg_name": "ceph_vg0"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:        }
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:    ],
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:    "1": [
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:        {
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "devices": [
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "/dev/loop4"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            ],
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_name": "ceph_lv1",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_size": "21470642176",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "name": "ceph_lv1",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "tags": {
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cluster_name": "ceph",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.crush_device_class": "",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.encrypted": "0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.objectstore": "bluestore",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osd_id": "1",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.type": "block",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.vdo": "0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.with_tpm": "0"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            },
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "type": "block",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "vg_name": "ceph_vg1"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:        }
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:    ],
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:    "2": [
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:        {
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "devices": [
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "/dev/loop5"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            ],
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_name": "ceph_lv2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_size": "21470642176",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "name": "ceph_lv2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "tags": {
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.cluster_name": "ceph",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.crush_device_class": "",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.encrypted": "0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.objectstore": "bluestore",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osd_id": "2",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.type": "block",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.vdo": "0",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:                "ceph.with_tpm": "0"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            },
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "type": "block",
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:            "vg_name": "ceph_vg2"
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:        }
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]:    ]
Jan 10 12:06:07 np0005580781 vigilant_lalande[137878]: }
Jan 10 12:06:07 np0005580781 systemd[1]: libpod-7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6.scope: Deactivated successfully.
Jan 10 12:06:07 np0005580781 podman[137835]: 2026-01-10 17:06:07.451154393 +0000 UTC m=+0.542627417 container died 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:06:07 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9e2bbbaaab85e1878fe74adeee61f3b6d2e0b7c859eab06544070743a499bf4d-merged.mount: Deactivated successfully.
Jan 10 12:06:07 np0005580781 podman[137835]: 2026-01-10 17:06:07.509623316 +0000 UTC m=+0.601096310 container remove 7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:06:07 np0005580781 systemd[1]: libpod-conmon-7a02d0b23ce16db462e0fea6d14ac4a0f182a5765ad9906efdc3c12f755987b6.scope: Deactivated successfully.
Jan 10 12:06:07 np0005580781 podman[138144]: 2026-01-10 17:06:07.963836778 +0000 UTC m=+0.042907797 container create 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:07 np0005580781 python3.9[138131]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:08 np0005580781 systemd[1]: Started libpod-conmon-97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c.scope.
Jan 10 12:06:08 np0005580781 podman[138144]: 2026-01-10 17:06:07.948087456 +0000 UTC m=+0.027158495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:06:08 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:06:08 np0005580781 podman[138144]: 2026-01-10 17:06:08.064898889 +0000 UTC m=+0.143969908 container init 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 12:06:08 np0005580781 podman[138144]: 2026-01-10 17:06:08.070895923 +0000 UTC m=+0.149966942 container start 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 12:06:08 np0005580781 podman[138144]: 2026-01-10 17:06:08.074552653 +0000 UTC m=+0.153623672 container attach 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:08 np0005580781 adoring_dewdney[138163]: 167 167
Jan 10 12:06:08 np0005580781 systemd[1]: libpod-97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c.scope: Deactivated successfully.
Jan 10 12:06:08 np0005580781 podman[138144]: 2026-01-10 17:06:08.07735591 +0000 UTC m=+0.156426929 container died 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 12:06:08 np0005580781 systemd[1]: var-lib-containers-storage-overlay-7f4470869fa72f9c7077527baa5ea2668b5484f728db6f34ae3b45603db0cb47-merged.mount: Deactivated successfully.
Jan 10 12:06:08 np0005580781 podman[138144]: 2026-01-10 17:06:08.116867403 +0000 UTC m=+0.195938422 container remove 97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 12:06:08 np0005580781 systemd[1]: libpod-conmon-97573d3a33e0b90ec3bf5a965467451e63096a3c8f19a17faa9e470890ca4d2c.scope: Deactivated successfully.
Jan 10 12:06:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:08 np0005580781 podman[138258]: 2026-01-10 17:06:08.28271456 +0000 UTC m=+0.045232471 container create 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:06:08 np0005580781 systemd[1]: Started libpod-conmon-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope.
Jan 10 12:06:08 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:06:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:08 np0005580781 podman[138258]: 2026-01-10 17:06:08.351069924 +0000 UTC m=+0.113587905 container init 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:06:08 np0005580781 podman[138258]: 2026-01-10 17:06:08.261052596 +0000 UTC m=+0.023570567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:06:08 np0005580781 podman[138258]: 2026-01-10 17:06:08.361908481 +0000 UTC m=+0.124426372 container start 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 12:06:08 np0005580781 podman[138258]: 2026-01-10 17:06:08.365529471 +0000 UTC m=+0.128047412 container attach 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 12:06:08 np0005580781 python3.9[138270]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:06:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:06:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:06:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:06:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:06:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:06:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:09 np0005580781 lvm[138511]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:06:09 np0005580781 lvm[138511]: VG ceph_vg2 finished
Jan 10 12:06:09 np0005580781 lvm[138509]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:06:09 np0005580781 lvm[138509]: VG ceph_vg1 finished
Jan 10 12:06:09 np0005580781 lvm[138508]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:06:09 np0005580781 lvm[138508]: VG ceph_vg0 finished
Jan 10 12:06:09 np0005580781 python3.9[138496]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:09 np0005580781 amazing_benz[138278]: {}
Jan 10 12:06:09 np0005580781 systemd[1]: libpod-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope: Deactivated successfully.
Jan 10 12:06:09 np0005580781 systemd[1]: libpod-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope: Consumed 1.370s CPU time.
Jan 10 12:06:09 np0005580781 podman[138258]: 2026-01-10 17:06:09.219586785 +0000 UTC m=+0.982104676 container died 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:06:09 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e9a64a5354f73828a27cfd3e17157f7382d32325a647daba825f9b8d083d970e-merged.mount: Deactivated successfully.
Jan 10 12:06:09 np0005580781 podman[138258]: 2026-01-10 17:06:09.272114115 +0000 UTC m=+1.034632016 container remove 339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_benz, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 12:06:09 np0005580781 systemd[1]: libpod-conmon-339e3eba406512ab45b4c4b1c65f553b34aa5b14b169ffbbc915d70f5c32a910.scope: Deactivated successfully.
Jan 10 12:06:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:06:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:06:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:06:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:06:09 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:06:09 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:06:09 np0005580781 python3.9[138700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:10 np0005580781 python3.9[138778]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:10 np0005580781 python3.9[138930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:11 np0005580781 python3.9[139008]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:11 np0005580781 python3.9[139160]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:06:11 np0005580781 systemd[1]: Reloading.
Jan 10 12:06:12 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:06:12 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:06:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:12 np0005580781 python3.9[139349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:13 np0005580781 python3.9[139427]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:14 np0005580781 python3.9[139579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:14 np0005580781 python3.9[139657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:15 np0005580781 python3.9[139809]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:06:15 np0005580781 systemd[1]: Reloading.
Jan 10 12:06:15 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:06:15 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:06:15 np0005580781 systemd[1]: Starting Create netns directory...
Jan 10 12:06:15 np0005580781 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 12:06:15 np0005580781 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 12:06:15 np0005580781 systemd[1]: Finished Create netns directory.
Jan 10 12:06:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:16 np0005580781 python3.9[140002]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:17 np0005580781 python3.9[140154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:18 np0005580781 python3.9[140277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064776.947781-463-260263131858153/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:19 np0005580781 python3.9[140429]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:19 np0005580781 python3.9[140581]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:20 np0005580781 python3.9[140733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:21 np0005580781 python3.9[140856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064780.1669548-496-7010102275505/.source.json _original_basename=.rr0bslbi follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:22 np0005580781 python3.9[141006]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:24 np0005580781 python3.9[141429]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 10 12:06:25 np0005580781 python3.9[141581]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 12:06:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:26 np0005580781 python3[141733]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 12:06:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.083175) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789083385, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 777, "num_deletes": 251, "total_data_size": 700175, "memory_usage": 714144, "flush_reason": "Manual Compaction"}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789089317, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 448426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6833, "largest_seqno": 7609, "table_properties": {"data_size": 445179, "index_size": 1091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8280, "raw_average_key_size": 19, "raw_value_size": 438295, "raw_average_value_size": 1031, "num_data_blocks": 52, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064719, "oldest_key_time": 1768064719, "file_creation_time": 1768064789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6145 microseconds, and 3384 cpu microseconds.
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.089378) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 448426 bytes OK
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.089408) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.090867) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.090895) EVENT_LOG_v1 {"time_micros": 1768064789090885, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.090926) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 696261, prev total WAL file size 696261, number of live WAL files 2.
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.091954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(437KB)], [20(5700KB)]
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789092106, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 6285746, "oldest_snapshot_seqno": -1}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2635 keys, 4600375 bytes, temperature: kUnknown
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789135181, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 4600375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4580544, "index_size": 12170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 61213, "raw_average_key_size": 23, "raw_value_size": 4531027, "raw_average_value_size": 1719, "num_data_blocks": 550, "num_entries": 2635, "num_filter_entries": 2635, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768064789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.135762) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 4600375 bytes
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.137480) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.6 rd, 106.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.6 +0.0 blob) out(4.4 +0.0 blob), read-write-amplify(24.3) write-amplify(10.3) OK, records in: 3120, records dropped: 485 output_compression: NoCompression
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.137515) EVENT_LOG_v1 {"time_micros": 1768064789137493, "job": 6, "event": "compaction_finished", "compaction_time_micros": 43166, "compaction_time_cpu_micros": 21223, "output_level": 6, "num_output_files": 1, "total_output_size": 4600375, "num_input_records": 3120, "num_output_records": 2635, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789137923, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064789139170, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.091660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:06:29 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:06:29.139299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:06:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:32 np0005580781 podman[141746]: 2026-01-10 17:06:32.055921262 +0000 UTC m=+5.323549676 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 10 12:06:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:32 np0005580781 podman[141863]: 2026-01-10 17:06:32.151123932 +0000 UTC m=+0.020923405 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 10 12:06:32 np0005580781 podman[141863]: 2026-01-10 17:06:32.840787488 +0000 UTC m=+0.710586981 container create a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 10 12:06:32 np0005580781 python3[141733]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 10 12:06:33 np0005580781 python3.9[142053]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:06:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:34 np0005580781 python3.9[142207]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:35 np0005580781 python3.9[142283]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:06:36 np0005580781 python3.9[142434]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064795.1105042-574-111213912652636/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:36 np0005580781 python3.9[142510]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 12:06:36 np0005580781 systemd[1]: Reloading.
Jan 10 12:06:36 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:06:36 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:06:37 np0005580781 python3.9[142622]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:06:37 np0005580781 systemd[1]: Reloading.
Jan 10 12:06:37 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:06:37 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:06:38
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups']
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:06:38 np0005580781 systemd[1]: Starting ovn_controller container...
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:38 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:06:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89c32a35016ce12f156ce7d42ff13e3d9db2ffe7b430448d292e6f318ab3d1b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 10 12:06:38 np0005580781 systemd[1]: Started /usr/bin/podman healthcheck run a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f.
Jan 10 12:06:38 np0005580781 podman[142663]: 2026-01-10 17:06:38.335249349 +0000 UTC m=+0.167162614 container init a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + sudo -E kolla_set_configs
Jan 10 12:06:38 np0005580781 podman[142663]: 2026-01-10 17:06:38.377052115 +0000 UTC m=+0.208965320 container start a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 12:06:38 np0005580781 edpm-start-podman-container[142663]: ovn_controller
Jan 10 12:06:38 np0005580781 systemd[1]: Created slice User Slice of UID 0.
Jan 10 12:06:38 np0005580781 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 10 12:06:38 np0005580781 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 10 12:06:38 np0005580781 systemd[1]: Starting User Manager for UID 0...
Jan 10 12:06:38 np0005580781 edpm-start-podman-container[142662]: Creating additional drop-in dependency for "ovn_controller" (a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f)
Jan 10 12:06:38 np0005580781 podman[142685]: 2026-01-10 17:06:38.487866573 +0000 UTC m=+0.093933176 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 12:06:38 np0005580781 systemd[1]: a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f-43958acf483cf7eb.service: Main process exited, code=exited, status=1/FAILURE
Jan 10 12:06:38 np0005580781 systemd[1]: a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f-43958acf483cf7eb.service: Failed with result 'exit-code'.
Jan 10 12:06:38 np0005580781 systemd[1]: Reloading.
Jan 10 12:06:38 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:06:38 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:06:38 np0005580781 systemd[142718]: Queued start job for default target Main User Target.
Jan 10 12:06:38 np0005580781 systemd[142718]: Created slice User Application Slice.
Jan 10 12:06:38 np0005580781 systemd[142718]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 10 12:06:38 np0005580781 systemd[142718]: Started Daily Cleanup of User's Temporary Directories.
Jan 10 12:06:38 np0005580781 systemd[142718]: Reached target Paths.
Jan 10 12:06:38 np0005580781 systemd[142718]: Reached target Timers.
Jan 10 12:06:38 np0005580781 systemd[142718]: Starting D-Bus User Message Bus Socket...
Jan 10 12:06:38 np0005580781 systemd[142718]: Starting Create User's Volatile Files and Directories...
Jan 10 12:06:38 np0005580781 systemd[142718]: Finished Create User's Volatile Files and Directories.
Jan 10 12:06:38 np0005580781 systemd[142718]: Listening on D-Bus User Message Bus Socket.
Jan 10 12:06:38 np0005580781 systemd[142718]: Reached target Sockets.
Jan 10 12:06:38 np0005580781 systemd[142718]: Reached target Basic System.
Jan 10 12:06:38 np0005580781 systemd[142718]: Reached target Main User Target.
Jan 10 12:06:38 np0005580781 systemd[142718]: Startup finished in 205ms.
Jan 10 12:06:38 np0005580781 systemd[1]: Started User Manager for UID 0.
Jan 10 12:06:38 np0005580781 systemd[1]: Started Session c1 of User root.
Jan 10 12:06:38 np0005580781 systemd[1]: Started ovn_controller container.
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: INFO:__main__:Validating config file
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: INFO:__main__:Writing out command to execute
Jan 10 12:06:38 np0005580781 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: ++ cat /run_command
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + ARGS=
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + sudo kolla_copy_cacerts
Jan 10 12:06:38 np0005580781 systemd[1]: Started Session c2 of User root.
Jan 10 12:06:38 np0005580781 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + [[ ! -n '' ]]
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + . kolla_extend_start
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + umask 0022
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 10 12:06:38 np0005580781 NetworkManager[49047]: <info>  [1768064798.9816] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:06:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:06:38 np0005580781 NetworkManager[49047]: <info>  [1768064798.9827] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 10 12:06:38 np0005580781 NetworkManager[49047]: <warn>  [1768064798.9830] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 10 12:06:38 np0005580781 NetworkManager[49047]: <info>  [1768064798.9841] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 10 12:06:38 np0005580781 NetworkManager[49047]: <info>  [1768064798.9851] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 10 12:06:38 np0005580781 NetworkManager[49047]: <info>  [1768064798.9857] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 10 12:06:38 np0005580781 kernel: br-int: entered promiscuous mode
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 10 12:06:38 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:38Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 10 12:06:39 np0005580781 NetworkManager[49047]: <info>  [1768064799.0119] manager: (ovn-198a04-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 10 12:06:39 np0005580781 kernel: genev_sys_6081: entered promiscuous mode
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:06:39 np0005580781 NetworkManager[49047]: <info>  [1768064799.0372] device (genev_sys_6081): carrier: link connected
Jan 10 12:06:39 np0005580781 NetworkManager[49047]: <info>  [1768064799.0375] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 12:06:39 np0005580781 systemd-udevd[142814]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 12:06:39 np0005580781 systemd-udevd[142815]: Network interface NamePolicy= disabled on kernel command line.
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:06:39 np0005580781 ovn_controller[142678]: 2026-01-10T17:06:39Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:06:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:06:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:39 np0005580781 python3.9[142945]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 10 12:06:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:40 np0005580781 python3.9[143097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:41 np0005580781 python3.9[143220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064800.2842486-619-239591908359752/.source.yaml _original_basename=.rn5e8xzu follow=False checksum=3b0154ee2942b8cfaec22aa738d9e56f48fa5c3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:06:42 np0005580781 python3.9[143372]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:06:42 np0005580781 ovs-vsctl[143373]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 10 12:06:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:42 np0005580781 python3.9[143525]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:06:42 np0005580781 ovs-vsctl[143527]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 10 12:06:43 np0005580781 python3.9[143680]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:06:43 np0005580781 ovs-vsctl[143681]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 10 12:06:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:44 np0005580781 systemd[1]: session-46.scope: Deactivated successfully.
Jan 10 12:06:44 np0005580781 systemd[1]: session-46.scope: Consumed 1min 2.304s CPU time.
Jan 10 12:06:44 np0005580781 systemd-logind[798]: Session 46 logged out. Waiting for processes to exit.
Jan 10 12:06:44 np0005580781 systemd-logind[798]: Removed session 46.
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:06:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:06:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:48 np0005580781 systemd[1]: Stopping User Manager for UID 0...
Jan 10 12:06:48 np0005580781 systemd[142718]: Activating special unit Exit the Session...
Jan 10 12:06:48 np0005580781 systemd[142718]: Stopped target Main User Target.
Jan 10 12:06:48 np0005580781 systemd[142718]: Stopped target Basic System.
Jan 10 12:06:48 np0005580781 systemd[142718]: Stopped target Paths.
Jan 10 12:06:48 np0005580781 systemd[142718]: Stopped target Sockets.
Jan 10 12:06:48 np0005580781 systemd[142718]: Stopped target Timers.
Jan 10 12:06:48 np0005580781 systemd[142718]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 10 12:06:48 np0005580781 systemd[142718]: Closed D-Bus User Message Bus Socket.
Jan 10 12:06:48 np0005580781 systemd[142718]: Stopped Create User's Volatile Files and Directories.
Jan 10 12:06:48 np0005580781 systemd[142718]: Removed slice User Application Slice.
Jan 10 12:06:48 np0005580781 systemd[142718]: Reached target Shutdown.
Jan 10 12:06:48 np0005580781 systemd[142718]: Finished Exit the Session.
Jan 10 12:06:48 np0005580781 systemd[142718]: Reached target Exit the Session.
Jan 10 12:06:48 np0005580781 systemd[1]: user@0.service: Deactivated successfully.
Jan 10 12:06:48 np0005580781 systemd[1]: Stopped User Manager for UID 0.
Jan 10 12:06:49 np0005580781 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 10 12:06:49 np0005580781 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 10 12:06:49 np0005580781 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 10 12:06:49 np0005580781 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 10 12:06:49 np0005580781 systemd[1]: Removed slice User Slice of UID 0.
Jan 10 12:06:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:49 np0005580781 systemd-logind[798]: New session 48 of user zuul.
Jan 10 12:06:49 np0005580781 systemd[1]: Started Session 48 of User zuul.
Jan 10 12:06:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:50 np0005580781 python3.9[143860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:06:51 np0005580781 python3.9[144016]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:52 np0005580781 python3.9[144168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:53 np0005580781 python3.9[144320]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:53 np0005580781 python3.9[144472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:54 np0005580781 python3.9[144624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:55 np0005580781 python3.9[144774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:06:56 np0005580781 python3.9[144926]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 10 12:06:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:57 np0005580781 python3.9[145077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:06:58 np0005580781 python3.9[145198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064816.9693048-81-192797117813710/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:06:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:06:59 np0005580781 python3.9[145348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:06:59 np0005580781 python3.9[145469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064818.6912684-96-281359478951605/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:00 np0005580781 python3.9[145621]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:07:01 np0005580781 python3.9[145705]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:07:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:04 np0005580781 python3.9[145858]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:07:05 np0005580781 python3.9[146011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:05 np0005580781 python3.9[146132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064824.706127-133-247400209816762/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:06 np0005580781 python3.9[146282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:06 np0005580781 python3.9[146403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064825.883845-133-135031237273606/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:08 np0005580781 python3.9[146553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:08 np0005580781 ovn_controller[142678]: 2026-01-10T17:07:08Z|00025|memory|INFO|16384 kB peak resident set size after 29.8 seconds
Jan 10 12:07:08 np0005580781 ovn_controller[142678]: 2026-01-10T17:07:08Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 10 12:07:08 np0005580781 podman[146648]: 2026-01-10 17:07:08.784683772 +0000 UTC m=+0.170203790 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 12:07:08 np0005580781 python3.9[146684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064827.7651436-177-116100544987916/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:07:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:07:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:07:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:07:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:07:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:07:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:09 np0005580781 python3.9[146850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:10 np0005580781 python3.9[147035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064829.0276961-177-62603822168689/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:07:10 np0005580781 podman[147214]: 2026-01-10 17:07:10.656989154 +0000 UTC m=+0.041997677 container create f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:07:10 np0005580781 systemd[1]: Started libpod-conmon-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope.
Jan 10 12:07:10 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:07:10 np0005580781 podman[147214]: 2026-01-10 17:07:10.637966003 +0000 UTC m=+0.022974556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:07:10 np0005580781 podman[147214]: 2026-01-10 17:07:10.738793363 +0000 UTC m=+0.123801916 container init f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 12:07:10 np0005580781 podman[147214]: 2026-01-10 17:07:10.745466236 +0000 UTC m=+0.130474769 container start f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:07:10 np0005580781 systemd[1]: libpod-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope: Deactivated successfully.
Jan 10 12:07:10 np0005580781 nifty_diffie[147279]: 167 167
Jan 10 12:07:10 np0005580781 podman[147214]: 2026-01-10 17:07:10.750481731 +0000 UTC m=+0.135490294 container attach f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 12:07:10 np0005580781 conmon[147279]: conmon f08a367ef13e9791d80e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope/container/memory.events
Jan 10 12:07:10 np0005580781 podman[147214]: 2026-01-10 17:07:10.753669284 +0000 UTC m=+0.138677807 container died f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 12:07:10 np0005580781 systemd[1]: var-lib-containers-storage-overlay-23153237b81e905ca124b33bdd0ec8edd15ca9e5c5b1f9c5ba6fe1c3bd003585-merged.mount: Deactivated successfully.
Jan 10 12:07:10 np0005580781 podman[147214]: 2026-01-10 17:07:10.799907923 +0000 UTC m=+0.184916456 container remove f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 12:07:10 np0005580781 systemd[1]: libpod-conmon-f08a367ef13e9791d80ecd65591c2c69b880f80649a881e41a85995465e39f54.scope: Deactivated successfully.
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:07:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:07:10 np0005580781 python3.9[147283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:07:10 np0005580781 podman[147308]: 2026-01-10 17:07:10.984181929 +0000 UTC m=+0.047738643 container create 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 12:07:11 np0005580781 systemd[1]: Started libpod-conmon-56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c.scope.
Jan 10 12:07:11 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:07:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:11 np0005580781 podman[147308]: 2026-01-10 17:07:10.967336612 +0000 UTC m=+0.030893326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:07:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:11 np0005580781 podman[147308]: 2026-01-10 17:07:11.076075701 +0000 UTC m=+0.139632465 container init 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:07:11 np0005580781 podman[147308]: 2026-01-10 17:07:11.083484095 +0000 UTC m=+0.147040809 container start 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 12:07:11 np0005580781 podman[147308]: 2026-01-10 17:07:11.087748939 +0000 UTC m=+0.151305693 container attach 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 10 12:07:11 np0005580781 focused_morse[147348]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:07:11 np0005580781 focused_morse[147348]: --> All data devices are unavailable
Jan 10 12:07:11 np0005580781 systemd[1]: libpod-56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c.scope: Deactivated successfully.
Jan 10 12:07:11 np0005580781 podman[147308]: 2026-01-10 17:07:11.618310984 +0000 UTC m=+0.681867698 container died 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:07:11 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c83c1b8001abd5020e1df90f9c96f0abcb4b54bcec2849713717daf58e810321-merged.mount: Deactivated successfully.
Jan 10 12:07:11 np0005580781 podman[147308]: 2026-01-10 17:07:11.662991808 +0000 UTC m=+0.726548522 container remove 56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 12:07:11 np0005580781 systemd[1]: libpod-conmon-56196550ee60fca9697dc61414534d6a8cfe559ea58563e81414be875cc4513c.scope: Deactivated successfully.
Jan 10 12:07:11 np0005580781 python3.9[147488]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:12 np0005580781 podman[147684]: 2026-01-10 17:07:12.078124419 +0000 UTC m=+0.037328632 container create de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:07:12 np0005580781 systemd[1]: Started libpod-conmon-de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8.scope.
Jan 10 12:07:12 np0005580781 podman[147684]: 2026-01-10 17:07:12.060609572 +0000 UTC m=+0.019813835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:07:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:07:12 np0005580781 podman[147684]: 2026-01-10 17:07:12.17690797 +0000 UTC m=+0.136112233 container init de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:07:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:12 np0005580781 podman[147684]: 2026-01-10 17:07:12.183184081 +0000 UTC m=+0.142388314 container start de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:07:12 np0005580781 podman[147684]: 2026-01-10 17:07:12.18694229 +0000 UTC m=+0.146146513 container attach de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:07:12 np0005580781 recursing_kalam[147736]: 167 167
Jan 10 12:07:12 np0005580781 systemd[1]: libpod-de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8.scope: Deactivated successfully.
Jan 10 12:07:12 np0005580781 podman[147684]: 2026-01-10 17:07:12.193009996 +0000 UTC m=+0.152214319 container died de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 12:07:12 np0005580781 systemd[1]: var-lib-containers-storage-overlay-097fa46bd5d260fec7d6c12df11a479f6353b8c6c4c07aa76f283c0d686e0450-merged.mount: Deactivated successfully.
Jan 10 12:07:12 np0005580781 podman[147684]: 2026-01-10 17:07:12.244298001 +0000 UTC m=+0.203502254 container remove de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:07:12 np0005580781 systemd[1]: libpod-conmon-de998e4e3f6e30ffd6761c57587a6e439e8dac29ed6857e87f2e41546407e2f8.scope: Deactivated successfully.
Jan 10 12:07:12 np0005580781 python3.9[147738]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:12 np0005580781 podman[147762]: 2026-01-10 17:07:12.450101611 +0000 UTC m=+0.046556759 container create 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:07:12 np0005580781 systemd[1]: Started libpod-conmon-7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f.scope.
Jan 10 12:07:12 np0005580781 podman[147762]: 2026-01-10 17:07:12.42901087 +0000 UTC m=+0.025465998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:07:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:07:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:12 np0005580781 podman[147762]: 2026-01-10 17:07:12.564844294 +0000 UTC m=+0.161299492 container init 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:07:12 np0005580781 podman[147762]: 2026-01-10 17:07:12.578109828 +0000 UTC m=+0.174564966 container start 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:07:12 np0005580781 podman[147762]: 2026-01-10 17:07:12.582434834 +0000 UTC m=+0.178889962 container attach 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:07:12 np0005580781 python3.9[147859]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:12 np0005580781 boring_benz[147802]: {
Jan 10 12:07:12 np0005580781 boring_benz[147802]:    "0": [
Jan 10 12:07:12 np0005580781 boring_benz[147802]:        {
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "devices": [
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "/dev/loop3"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            ],
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_name": "ceph_lv0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_size": "21470642176",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "name": "ceph_lv0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "tags": {
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cluster_name": "ceph",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.crush_device_class": "",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.encrypted": "0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.objectstore": "bluestore",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osd_id": "0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.type": "block",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.vdo": "0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.with_tpm": "0"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            },
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "type": "block",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "vg_name": "ceph_vg0"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:        }
Jan 10 12:07:12 np0005580781 boring_benz[147802]:    ],
Jan 10 12:07:12 np0005580781 boring_benz[147802]:    "1": [
Jan 10 12:07:12 np0005580781 boring_benz[147802]:        {
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "devices": [
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "/dev/loop4"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            ],
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_name": "ceph_lv1",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_size": "21470642176",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "name": "ceph_lv1",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "tags": {
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cluster_name": "ceph",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.crush_device_class": "",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.encrypted": "0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.objectstore": "bluestore",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osd_id": "1",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.type": "block",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.vdo": "0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.with_tpm": "0"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            },
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "type": "block",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "vg_name": "ceph_vg1"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:        }
Jan 10 12:07:12 np0005580781 boring_benz[147802]:    ],
Jan 10 12:07:12 np0005580781 boring_benz[147802]:    "2": [
Jan 10 12:07:12 np0005580781 boring_benz[147802]:        {
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "devices": [
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "/dev/loop5"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            ],
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_name": "ceph_lv2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_size": "21470642176",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "name": "ceph_lv2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "tags": {
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.cluster_name": "ceph",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.crush_device_class": "",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.encrypted": "0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.objectstore": "bluestore",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osd_id": "2",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.type": "block",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.vdo": "0",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:                "ceph.with_tpm": "0"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            },
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "type": "block",
Jan 10 12:07:12 np0005580781 boring_benz[147802]:            "vg_name": "ceph_vg2"
Jan 10 12:07:12 np0005580781 boring_benz[147802]:        }
Jan 10 12:07:12 np0005580781 boring_benz[147802]:    ]
Jan 10 12:07:12 np0005580781 boring_benz[147802]: }
Jan 10 12:07:12 np0005580781 systemd[1]: libpod-7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f.scope: Deactivated successfully.
Jan 10 12:07:12 np0005580781 podman[147762]: 2026-01-10 17:07:12.901965157 +0000 UTC m=+0.498420265 container died 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 12:07:12 np0005580781 systemd[1]: var-lib-containers-storage-overlay-66d2b36944818f97c356e12b85cdaae777c3af169f62982cf15ce45810a2cb69-merged.mount: Deactivated successfully.
Jan 10 12:07:12 np0005580781 podman[147762]: 2026-01-10 17:07:12.950884754 +0000 UTC m=+0.547339862 container remove 7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:07:12 np0005580781 systemd[1]: libpod-conmon-7cc233b07ad19ac7596380cc1a3947fc25f3dc41d6068d77a0e402423f017f1f.scope: Deactivated successfully.
Jan 10 12:07:13 np0005580781 podman[148094]: 2026-01-10 17:07:13.39414146 +0000 UTC m=+0.046070025 container create 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 12:07:13 np0005580781 systemd[1]: Started libpod-conmon-35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d.scope.
Jan 10 12:07:13 np0005580781 python3.9[148079]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:07:13 np0005580781 podman[148094]: 2026-01-10 17:07:13.372893605 +0000 UTC m=+0.024822170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:07:13 np0005580781 podman[148094]: 2026-01-10 17:07:13.470401279 +0000 UTC m=+0.122329874 container init 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:07:13 np0005580781 podman[148094]: 2026-01-10 17:07:13.47630701 +0000 UTC m=+0.128235575 container start 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 12:07:13 np0005580781 podman[148094]: 2026-01-10 17:07:13.480301896 +0000 UTC m=+0.132230501 container attach 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:07:13 np0005580781 competent_hopper[148110]: 167 167
Jan 10 12:07:13 np0005580781 systemd[1]: libpod-35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d.scope: Deactivated successfully.
Jan 10 12:07:13 np0005580781 podman[148094]: 2026-01-10 17:07:13.484986141 +0000 UTC m=+0.136914706 container died 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 12:07:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1b8ec561fcb06c3f3d8af0c795d550277fdfb5562bbc7ad611e54d4726a0e680-merged.mount: Deactivated successfully.
Jan 10 12:07:13 np0005580781 podman[148094]: 2026-01-10 17:07:13.528287225 +0000 UTC m=+0.180215810 container remove 35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:07:13 np0005580781 systemd[1]: libpod-conmon-35daaa43208955800346835fd7a3119dcb4255ea7fcf8744265552307992fd1d.scope: Deactivated successfully.
Jan 10 12:07:13 np0005580781 podman[148180]: 2026-01-10 17:07:13.735565268 +0000 UTC m=+0.057388263 container create bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:07:13 np0005580781 systemd[1]: Started libpod-conmon-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope.
Jan 10 12:07:13 np0005580781 podman[148180]: 2026-01-10 17:07:13.707298269 +0000 UTC m=+0.029121264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:07:13 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:07:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:13 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:13 np0005580781 podman[148180]: 2026-01-10 17:07:13.841913738 +0000 UTC m=+0.163736743 container init bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:07:13 np0005580781 podman[148180]: 2026-01-10 17:07:13.853007969 +0000 UTC m=+0.174830934 container start bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 12:07:13 np0005580781 podman[148180]: 2026-01-10 17:07:13.859289741 +0000 UTC m=+0.181112736 container attach bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 12:07:13 np0005580781 python3.9[148227]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:14 np0005580781 lvm[148459]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:07:14 np0005580781 lvm[148459]: VG ceph_vg0 finished
Jan 10 12:07:14 np0005580781 lvm[148460]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:07:14 np0005580781 lvm[148460]: VG ceph_vg1 finished
Jan 10 12:07:14 np0005580781 lvm[148462]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:07:14 np0005580781 lvm[148462]: VG ceph_vg2 finished
Jan 10 12:07:14 np0005580781 python3.9[148446]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:14 np0005580781 eager_chaum[148228]: {}
Jan 10 12:07:14 np0005580781 systemd[1]: libpod-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope: Deactivated successfully.
Jan 10 12:07:14 np0005580781 podman[148180]: 2026-01-10 17:07:14.69456318 +0000 UTC m=+1.016386135 container died bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 12:07:14 np0005580781 systemd[1]: libpod-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope: Consumed 1.485s CPU time.
Jan 10 12:07:14 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e60ec7a45037fbfe9af98cbbc6354b3bb1f9cc11256e55319754bd3659db0838-merged.mount: Deactivated successfully.
Jan 10 12:07:14 np0005580781 podman[148180]: 2026-01-10 17:07:14.754513517 +0000 UTC m=+1.076336512 container remove bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:07:14 np0005580781 systemd[1]: libpod-conmon-bc186779c388d38772f86a5859fd91a4b58cc67fc9c71815ffd71047255eb6b6.scope: Deactivated successfully.
Jan 10 12:07:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:07:14 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:07:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:07:14 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:07:15 np0005580781 python3.9[148652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:15 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:07:15 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:07:15 np0005580781 python3.9[148730]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:16 np0005580781 python3.9[148882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:17 np0005580781 python3.9[148960]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:18 np0005580781 python3.9[149112]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:07:18 np0005580781 systemd[1]: Reloading.
Jan 10 12:07:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:18 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:07:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:07:18 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:07:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1817 writes, 7853 keys, 1817 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s#012Cumulative WAL: 1817 writes, 1817 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1817 writes, 7853 keys, 1817 commit groups, 1.0 writes per commit group, ingest: 8.61 MB, 0.01 MB/s#012Interval WAL: 1817 writes, 1817 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.7      0.06              0.02         3    0.019       0      0       0.0       0.0#012  L6      1/0    4.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    115.3     98.9      0.10              0.05         2    0.050    6104    774       0.0       0.0#012 Sum      1/0    4.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     73.3    101.0      0.16              0.07         5    0.032    6104    774       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     75.3    103.4      0.15              0.07         4    0.039    6104    774       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    115.3     98.9      0.10              0.05         2    0.050    6104    774       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    111.7      0.05              0.02         2    0.027       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.006, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55efa2bef8d0#2 capacity: 308.00 MB usage: 667.53 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000119 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(41,597.50 KB,0.189447%) FilterBlock(6,24.23 KB,0.00768389%) IndexBlock(6,45.80 KB,0.0145206%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 10 12:07:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:19 np0005580781 python3.9[149301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:19 np0005580781 python3.9[149379]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:20 np0005580781 python3.9[149531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:21 np0005580781 python3.9[149609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:21 np0005580781 python3.9[149761]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:07:21 np0005580781 systemd[1]: Reloading.
Jan 10 12:07:22 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:07:22 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:07:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:22 np0005580781 systemd[1]: Starting Create netns directory...
Jan 10 12:07:22 np0005580781 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 10 12:07:22 np0005580781 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 10 12:07:22 np0005580781 systemd[1]: Finished Create netns directory.
Jan 10 12:07:23 np0005580781 python3.9[149954]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:24 np0005580781 python3.9[150106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:24 np0005580781 python3.9[150229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768064843.5239952-328-75271160631345/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:25 np0005580781 python3.9[150381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:26 np0005580781 python3.9[150533]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:07:27 np0005580781 python3.9[150685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:27 np0005580781 python3.9[150809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064846.5554848-361-121698914602896/.source.json _original_basename=.m1ydec72 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:28 np0005580781 python3.9[150960]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:30 np0005580781 python3.9[151383]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 10 12:07:31 np0005580781 python3.9[151535]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 12:07:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:33 np0005580781 python3[151687]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 12:07:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:07:38
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'volumes', 'backups']
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:07:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:07:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:07:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:40 np0005580781 podman[151772]: 2026-01-10 17:07:40.321225517 +0000 UTC m=+1.318319500 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 12:07:41 np0005580781 podman[151701]: 2026-01-10 17:07:41.760670012 +0000 UTC m=+8.570095148 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 10 12:07:41 np0005580781 podman[151855]: 2026-01-10 17:07:41.89666082 +0000 UTC m=+0.051247025 container create 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 10 12:07:41 np0005580781 podman[151855]: 2026-01-10 17:07:41.871898943 +0000 UTC m=+0.026485158 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 10 12:07:41 np0005580781 python3[151687]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 10 12:07:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:42 np0005580781 python3.9[152042]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:07:43 np0005580781 python3.9[152196]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:43 np0005580781 python3.9[152272]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:07:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:07:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:07:44 np0005580781 python3.9[152423]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768064863.9935336-439-773100210538/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:45 np0005580781 python3.9[152499]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 12:07:45 np0005580781 systemd[1]: Reloading.
Jan 10 12:07:45 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:07:45 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:07:46 np0005580781 python3.9[152609]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:07:46 np0005580781 systemd[1]: Reloading.
Jan 10 12:07:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:46 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:07:46 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:07:46 np0005580781 systemd[1]: Starting ovn_metadata_agent container...
Jan 10 12:07:46 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:07:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9203d38291e46e8251c2b66f9fbb0ed4d4f73da5133d73ec8b17c7b3cb1b6e2d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9203d38291e46e8251c2b66f9fbb0ed4d4f73da5133d73ec8b17c7b3cb1b6e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 10 12:07:46 np0005580781 systemd[1]: Started /usr/bin/podman healthcheck run 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f.
Jan 10 12:07:46 np0005580781 podman[152650]: 2026-01-10 17:07:46.779234827 +0000 UTC m=+0.277119226 container init 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + sudo -E kolla_set_configs
Jan 10 12:07:46 np0005580781 podman[152650]: 2026-01-10 17:07:46.807460325 +0000 UTC m=+0.305344684 container start 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Validating config file
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Copying service configuration files
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Writing out command to execute
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: ++ cat /run_command
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + CMD=neutron-ovn-metadata-agent
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + ARGS=
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + sudo kolla_copy_cacerts
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + [[ ! -n '' ]]
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + . kolla_extend_start
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: Running command: 'neutron-ovn-metadata-agent'
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + umask 0022
Jan 10 12:07:46 np0005580781 ovn_metadata_agent[152665]: + exec neutron-ovn-metadata-agent
Jan 10 12:07:46 np0005580781 edpm-start-podman-container[152650]: ovn_metadata_agent
Jan 10 12:07:47 np0005580781 podman[152673]: 2026-01-10 17:07:47.0013668 +0000 UTC m=+0.180883089 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 10 12:07:47 np0005580781 edpm-start-podman-container[152649]: Creating additional drop-in dependency for "ovn_metadata_agent" (4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f)
Jan 10 12:07:47 np0005580781 systemd[1]: Reloading.
Jan 10 12:07:47 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:07:47 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:07:47 np0005580781 systemd[1]: Started ovn_metadata_agent container.
Jan 10 12:07:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.871 152671 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.871 152671 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.872 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.873 152671 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.874 152671 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.875 152671 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.876 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.877 152671 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.878 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.879 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.880 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.881 152671 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.882 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.883 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.884 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.885 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.886 152671 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.887 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 python3.9[152908]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.888 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.889 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.890 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.891 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.892 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.893 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.894 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.895 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.896 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.897 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.898 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.899 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.900 152671 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.901 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.902 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.903 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.904 152671 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.905 152671 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.950 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.951 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.963 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name fbd04e21-7be2-4eb3-a385-03f0bb540a40 (UUID: fbd04e21-7be2-4eb3-a385-03f0bb540a40) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.987 152671 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.987 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.988 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.988 152671 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.990 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 10 12:07:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:48.996 152671 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.002 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'fbd04e21-7be2-4eb3-a385-03f0bb540a40'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f7a871b1d30>], external_ids={}, name=fbd04e21-7be2-4eb3-a385-03f0bb540a40, nb_cfg_timestamp=1768064807011, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.003 152671 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f7a87133eb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.004 152671 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.008 152671 DEBUG oslo_service.service [-] Started child 152933 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.012 152671 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmply24caqc/privsep.sock']#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.013 152933 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-170691'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.037 152933 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.038 152933 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.038 152933 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.043 152933 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.049 152933 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.056 152933 INFO eventlet.wsgi.server [-] (152933) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 10 12:07:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:49 np0005580781 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.717 152671 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.718 152671 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmply24caqc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.594 153043 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.601 153043 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.605 153043 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.605 153043 INFO oslo.privsep.daemon [-] privsep daemon running as pid 153043#033[00m
Jan 10 12:07:49 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:49.721 153043 DEBUG oslo.privsep.daemon [-] privsep: reply[53cbdb0e-39aa-4805-aa93-cd46619c4370]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 10 12:07:49 np0005580781 python3.9[153066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:07:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.255 153043 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.256 153043 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.256 153043 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:07:50 np0005580781 python3.9[153195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768064869.337805-484-94805663345967/.source.yaml _original_basename=.jo3c30n6 follow=False checksum=24e6f428ca40407899f031d999dfc3af0c87e301 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.771 153043 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8def25-921c-4741-96aa-c0261abfa229]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.777 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, column=external_ids, values=({'neutron:ovn-metadata-id': 'df62b40c-cd70-516a-95e8-1aab1acf968a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.800 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.808 152671 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.808 152671 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.809 152671 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.810 152671 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.811 152671 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.812 152671 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.813 152671 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.814 152671 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.815 152671 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.816 152671 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.817 152671 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.818 152671 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.819 152671 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.820 152671 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.821 152671 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.822 152671 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.823 152671 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.823 152671 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.823 152671 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.824 152671 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.824 152671 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.825 152671 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.826 152671 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.827 152671 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.827 152671 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.827 152671 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.828 152671 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.828 152671 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.828 152671 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.829 152671 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.830 152671 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.831 152671 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.831 152671 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.831 152671 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.832 152671 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.833 152671 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.833 152671 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.833 152671 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 systemd[1]: session-48.scope: Deactivated successfully.
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 systemd[1]: session-48.scope: Consumed 59.281s CPU time.
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.834 152671 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.835 152671 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.835 152671 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.835 152671 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.836 152671 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 systemd-logind[798]: Removed session 48.
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.837 152671 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.837 152671 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.837 152671 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.838 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.838 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.839 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.839 152671 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.839 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.840 152671 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.841 152671 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.841 152671 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.842 152671 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.843 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.844 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.844 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.844 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.845 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.846 152671 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.847 152671 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.848 152671 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.848 152671 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.848 152671 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.849 152671 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.850 152671 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.851 152671 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.852 152671 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.852 152671 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.852 152671 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.853 152671 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.854 152671 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.854 152671 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.854 152671 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.855 152671 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.855 152671 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.855 152671 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.856 152671 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.857 152671 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.857 152671 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.857 152671 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.858 152671 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.859 152671 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.860 152671 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.861 152671 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.862 152671 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.862 152671 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.863 152671 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.864 152671 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.864 152671 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.864 152671 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.865 152671 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.865 152671 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.865 152671 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.866 152671 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.866 152671 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.866 152671 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.867 152671 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.868 152671 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.868 152671 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.868 152671 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.869 152671 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.869 152671 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.869 152671 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.870 152671 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.871 152671 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.871 152671 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.871 152671 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.872 152671 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.873 152671 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.873 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.873 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.874 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.874 152671 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.874 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.875 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.875 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.875 152671 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.876 152671 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.877 152671 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.878 152671 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.878 152671 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.878 152671 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.879 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.880 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.880 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.880 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.881 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.882 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.883 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.884 152671 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:07:50 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:07:50.885 152671 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 10 12:07:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:55 np0005580781 systemd-logind[798]: New session 49 of user zuul.
Jan 10 12:07:55 np0005580781 systemd[1]: Started Session 49 of User zuul.
Jan 10 12:07:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:57 np0005580781 python3.9[153373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:07:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:07:58 np0005580781 python3.9[153529]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:07:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:07:59 np0005580781 python3.9[153694]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 12:07:59 np0005580781 systemd[1]: Reloading.
Jan 10 12:07:59 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:07:59 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:08:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:00 np0005580781 python3.9[153879]: ansible-ansible.builtin.service_facts Invoked
Jan 10 12:08:01 np0005580781 network[153896]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 12:08:01 np0005580781 network[153897]: 'network-scripts' will be removed from distribution in near future.
Jan 10 12:08:01 np0005580781 network[153898]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 12:08:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:05 np0005580781 python3.9[154160]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:08:05 np0005580781 python3.9[154313]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:08:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:06 np0005580781 python3.9[154466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:08:07 np0005580781 python3.9[154619]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:08:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:08 np0005580781 python3.9[154772]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:08:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:08:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:08:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:08:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:08:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:08:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:08:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:09 np0005580781 python3.9[154925]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:08:10 np0005580781 python3.9[155078]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:08:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:11 np0005580781 python3.9[155231]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:11 np0005580781 python3.9[155383]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:12 np0005580781 podman[155507]: 2026-01-10 17:08:12.140809966 +0000 UTC m=+0.119152720 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 10 12:08:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:12 np0005580781 python3.9[155545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:12 np0005580781 python3.9[155711]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:13 np0005580781 python3.9[155863]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:14 np0005580781 python3.9[156015]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:14 np0005580781 python3.9[156167]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:15 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:08:15 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:15 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:08:15 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:15 np0005580781 python3.9[156391]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:08:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:08:16 np0005580781 python3.9[156624]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:16 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:08:16 np0005580781 podman[156766]: 2026-01-10 17:08:16.615241871 +0000 UTC m=+0.044420581 container create 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 10 12:08:16 np0005580781 systemd[1]: Started libpod-conmon-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope.
Jan 10 12:08:16 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:08:16 np0005580781 podman[156766]: 2026-01-10 17:08:16.599186676 +0000 UTC m=+0.028365416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:08:16 np0005580781 podman[156766]: 2026-01-10 17:08:16.700425099 +0000 UTC m=+0.129603829 container init 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 12:08:16 np0005580781 podman[156766]: 2026-01-10 17:08:16.708662304 +0000 UTC m=+0.137841014 container start 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:08:16 np0005580781 podman[156766]: 2026-01-10 17:08:16.712787851 +0000 UTC m=+0.141966591 container attach 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:08:16 np0005580781 systemd[1]: libpod-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope: Deactivated successfully.
Jan 10 12:08:16 np0005580781 hungry_poitras[156827]: 167 167
Jan 10 12:08:16 np0005580781 conmon[156827]: conmon 2e0ebbb1f6add9e37b37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope/container/memory.events
Jan 10 12:08:16 np0005580781 podman[156766]: 2026-01-10 17:08:16.715477841 +0000 UTC m=+0.144656571 container died 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:08:16 np0005580781 systemd[1]: var-lib-containers-storage-overlay-20a296bee2a646f8c157f834cc1633ac4f583f3ee7ae936c82fc34940422c8f5-merged.mount: Deactivated successfully.
Jan 10 12:08:16 np0005580781 podman[156766]: 2026-01-10 17:08:16.755525765 +0000 UTC m=+0.184704485 container remove 2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 12:08:16 np0005580781 systemd[1]: libpod-conmon-2e0ebbb1f6add9e37b379af0fe26fbefe5a02467f6fc5b9dc7207f04842855c0.scope: Deactivated successfully.
Jan 10 12:08:16 np0005580781 python3.9[156860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:16 np0005580781 podman[156880]: 2026-01-10 17:08:16.960541625 +0000 UTC m=+0.069121824 container create 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:08:17 np0005580781 systemd[1]: Started libpod-conmon-5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f.scope.
Jan 10 12:08:17 np0005580781 podman[156880]: 2026-01-10 17:08:16.929538012 +0000 UTC m=+0.038118281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:08:17 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:08:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:17 np0005580781 podman[156880]: 2026-01-10 17:08:17.059320516 +0000 UTC m=+0.167900735 container init 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 12:08:17 np0005580781 podman[156880]: 2026-01-10 17:08:17.070763567 +0000 UTC m=+0.179343766 container start 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:08:17 np0005580781 podman[156880]: 2026-01-10 17:08:17.075745923 +0000 UTC m=+0.184326122 container attach 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:08:17 np0005580781 podman[156946]: 2026-01-10 17:08:17.117836635 +0000 UTC m=+0.058844691 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 10 12:08:17 np0005580781 python3.9[157072]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:17 np0005580781 fervent_cannon[156926]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:08:17 np0005580781 fervent_cannon[156926]: --> All data devices are unavailable
Jan 10 12:08:17 np0005580781 systemd[1]: libpod-5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f.scope: Deactivated successfully.
Jan 10 12:08:17 np0005580781 podman[156880]: 2026-01-10 17:08:17.731322664 +0000 UTC m=+0.839902863 container died 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 12:08:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay-f55854b2bf3f5dd1b536b4ecb8d8e10081c9b80a2ce0f998d21e3bfff1c38012-merged.mount: Deactivated successfully.
Jan 10 12:08:17 np0005580781 podman[156880]: 2026-01-10 17:08:17.794273811 +0000 UTC m=+0.902854030 container remove 5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:08:17 np0005580781 systemd[1]: libpod-conmon-5fc77d048ce4e11bf435608587f8aa36167895819c75995b720fce85671a174f.scope: Deactivated successfully.
Jan 10 12:08:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:18 np0005580781 python3.9[157298]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:18 np0005580781 podman[157311]: 2026-01-10 17:08:18.363608968 +0000 UTC m=+0.058111896 container create 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 12:08:18 np0005580781 systemd[1]: Started libpod-conmon-442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c.scope.
Jan 10 12:08:18 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:08:18 np0005580781 podman[157311]: 2026-01-10 17:08:18.341952837 +0000 UTC m=+0.036455825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:08:18 np0005580781 podman[157311]: 2026-01-10 17:08:18.447587166 +0000 UTC m=+0.142090144 container init 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:08:18 np0005580781 podman[157311]: 2026-01-10 17:08:18.455608743 +0000 UTC m=+0.150111691 container start 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:08:18 np0005580781 podman[157311]: 2026-01-10 17:08:18.459483443 +0000 UTC m=+0.153986391 container attach 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 12:08:18 np0005580781 priceless_babbage[157357]: 167 167
Jan 10 12:08:18 np0005580781 systemd[1]: libpod-442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c.scope: Deactivated successfully.
Jan 10 12:08:18 np0005580781 podman[157311]: 2026-01-10 17:08:18.462429821 +0000 UTC m=+0.156932839 container died 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:08:18 np0005580781 systemd[1]: var-lib-containers-storage-overlay-afaaee9c5a08b90be4f2bd0db0aa4f9c1b9f36424df35ae351db25b69b70c2e8-merged.mount: Deactivated successfully.
Jan 10 12:08:18 np0005580781 podman[157311]: 2026-01-10 17:08:18.505980852 +0000 UTC m=+0.200483820 container remove 442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:08:18 np0005580781 systemd[1]: libpod-conmon-442715ec5d223de05c78ae87a34e9b32679c93da167f91f11ed9c67171f0d13c.scope: Deactivated successfully.
Jan 10 12:08:18 np0005580781 podman[157472]: 2026-01-10 17:08:18.709301755 +0000 UTC m=+0.047887556 container create 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 12:08:18 np0005580781 systemd[1]: Started libpod-conmon-02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c.scope.
Jan 10 12:08:18 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:08:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:18 np0005580781 podman[157472]: 2026-01-10 17:08:18.689051641 +0000 UTC m=+0.027637482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:08:18 np0005580781 podman[157472]: 2026-01-10 17:08:18.788729382 +0000 UTC m=+0.127315193 container init 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:08:18 np0005580781 podman[157472]: 2026-01-10 17:08:18.795642002 +0000 UTC m=+0.134227813 container start 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 12:08:18 np0005580781 podman[157472]: 2026-01-10 17:08:18.79979339 +0000 UTC m=+0.138379191 container attach 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:08:18 np0005580781 python3.9[157520]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]: {
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:    "0": [
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:        {
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "devices": [
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "/dev/loop3"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            ],
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_name": "ceph_lv0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_size": "21470642176",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "name": "ceph_lv0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "tags": {
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cluster_name": "ceph",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.crush_device_class": "",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.encrypted": "0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.objectstore": "bluestore",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osd_id": "0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.type": "block",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.vdo": "0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.with_tpm": "0"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            },
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "type": "block",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "vg_name": "ceph_vg0"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:        }
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:    ],
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:    "1": [
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:        {
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "devices": [
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "/dev/loop4"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            ],
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_name": "ceph_lv1",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_size": "21470642176",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "name": "ceph_lv1",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "tags": {
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cluster_name": "ceph",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.crush_device_class": "",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.encrypted": "0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.objectstore": "bluestore",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osd_id": "1",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.type": "block",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.vdo": "0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.with_tpm": "0"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            },
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "type": "block",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "vg_name": "ceph_vg1"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:        }
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:    ],
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:    "2": [
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:        {
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "devices": [
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "/dev/loop5"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            ],
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_name": "ceph_lv2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_size": "21470642176",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "name": "ceph_lv2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "tags": {
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.cluster_name": "ceph",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.crush_device_class": "",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.encrypted": "0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.objectstore": "bluestore",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osd_id": "2",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.type": "block",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.vdo": "0",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:                "ceph.with_tpm": "0"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            },
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "type": "block",
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:            "vg_name": "ceph_vg2"
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:        }
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]:    ]
Jan 10 12:08:19 np0005580781 nifty_swirles[157518]: }
Jan 10 12:08:19 np0005580781 systemd[1]: libpod-02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c.scope: Deactivated successfully.
Jan 10 12:08:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:19 np0005580781 podman[157472]: 2026-01-10 17:08:19.112492688 +0000 UTC m=+0.451078549 container died 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:08:19 np0005580781 systemd[1]: var-lib-containers-storage-overlay-942cc0f0a76a05ad39c978a9c7a55a6dacad3284d688c70616d29656de40397d-merged.mount: Deactivated successfully.
Jan 10 12:08:19 np0005580781 podman[157472]: 2026-01-10 17:08:19.162164203 +0000 UTC m=+0.500749994 container remove 02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swirles, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 12:08:19 np0005580781 systemd[1]: libpod-conmon-02db77fe727d94b9400ee0d0ecd0e8d74649f0f1b0f8717635a26567cac0ad3c.scope: Deactivated successfully.
Jan 10 12:08:19 np0005580781 podman[157754]: 2026-01-10 17:08:19.671594704 +0000 UTC m=+0.058321764 container create 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 12:08:19 np0005580781 python3.9[157741]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:08:19 np0005580781 systemd[1]: Started libpod-conmon-31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a.scope.
Jan 10 12:08:19 np0005580781 podman[157754]: 2026-01-10 17:08:19.642092872 +0000 UTC m=+0.028819602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:08:19 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:08:19 np0005580781 podman[157754]: 2026-01-10 17:08:19.761075124 +0000 UTC m=+0.147801824 container init 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:08:19 np0005580781 podman[157754]: 2026-01-10 17:08:19.768947097 +0000 UTC m=+0.155673777 container start 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:08:19 np0005580781 podman[157754]: 2026-01-10 17:08:19.772415882 +0000 UTC m=+0.159142552 container attach 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 12:08:19 np0005580781 great_bose[157771]: 167 167
Jan 10 12:08:19 np0005580781 systemd[1]: libpod-31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a.scope: Deactivated successfully.
Jan 10 12:08:19 np0005580781 podman[157754]: 2026-01-10 17:08:19.77533933 +0000 UTC m=+0.162066010 container died 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:08:19 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e0088c4e02a2066e0d8fc8562dc417a280e280d52cd59bd3d1c06b49ff696c07-merged.mount: Deactivated successfully.
Jan 10 12:08:19 np0005580781 podman[157754]: 2026-01-10 17:08:19.818026372 +0000 UTC m=+0.204753052 container remove 31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:08:19 np0005580781 systemd[1]: libpod-conmon-31cf20fcdf3ada6545187732cff7abfebe7d0e5cb1b2e29af40cd1ccf54b464a.scope: Deactivated successfully.
Jan 10 12:08:20 np0005580781 podman[157843]: 2026-01-10 17:08:20.061289846 +0000 UTC m=+0.069165535 container create 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 12:08:20 np0005580781 systemd[1]: Started libpod-conmon-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope.
Jan 10 12:08:20 np0005580781 podman[157843]: 2026-01-10 17:08:20.033435898 +0000 UTC m=+0.041311657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:08:20 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:08:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:08:20 np0005580781 podman[157843]: 2026-01-10 17:08:20.16827535 +0000 UTC m=+0.176151039 container init 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:08:20 np0005580781 podman[157843]: 2026-01-10 17:08:20.174776677 +0000 UTC m=+0.182652386 container start 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 12:08:20 np0005580781 podman[157843]: 2026-01-10 17:08:20.17937244 +0000 UTC m=+0.187248159 container attach 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 12:08:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:20 np0005580781 python3.9[157969]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:20 np0005580781 lvm[158122]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:08:20 np0005580781 lvm[158122]: VG ceph_vg1 finished
Jan 10 12:08:20 np0005580781 lvm[158121]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:08:20 np0005580781 lvm[158121]: VG ceph_vg0 finished
Jan 10 12:08:20 np0005580781 lvm[158124]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:08:20 np0005580781 lvm[158124]: VG ceph_vg2 finished
Jan 10 12:08:20 np0005580781 competent_mcclintock[157897]: {}
Jan 10 12:08:21 np0005580781 systemd[1]: libpod-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope: Deactivated successfully.
Jan 10 12:08:21 np0005580781 systemd[1]: libpod-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope: Consumed 1.281s CPU time.
Jan 10 12:08:21 np0005580781 podman[157843]: 2026-01-10 17:08:21.008234374 +0000 UTC m=+1.016110053 container died 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:08:21 np0005580781 systemd[1]: var-lib-containers-storage-overlay-090d23415d5c160d8db157b74e56f3bbd69f9331d29534364c5a2d6c0bd20d08-merged.mount: Deactivated successfully.
Jan 10 12:08:21 np0005580781 podman[157843]: 2026-01-10 17:08:21.054475114 +0000 UTC m=+1.062350783 container remove 26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:08:21 np0005580781 systemd[1]: libpod-conmon-26a53ef86f64ec35f74f2a8f7a90d87b35e27a295043a32e13280286750d2627.scope: Deactivated successfully.
Jan 10 12:08:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:08:21 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:21 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:08:21 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:21 np0005580781 python3.9[158211]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 12:08:21 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:21 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:08:22 np0005580781 python3.9[158388]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 12:08:22 np0005580781 systemd[1]: Reloading.
Jan 10 12:08:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:22 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:08:22 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:08:23 np0005580781 python3.9[158576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:23 np0005580781 python3.9[158729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:24 np0005580781 python3.9[158882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:25 np0005580781 python3.9[159035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:25 np0005580781 python3.9[159188]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:26 np0005580781 python3.9[159341]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:27 np0005580781 python3.9[159494]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:08:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:28 np0005580781 python3.9[159647]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 10 12:08:29 np0005580781 python3.9[159800]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 12:08:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:08:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s#012Interval WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000158 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 10 12:08:30 np0005580781 python3.9[159958]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 12:08:30 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 12:08:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:31 np0005580781 python3.9[160119]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:08:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:32 np0005580781 python3.9[160203]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:08:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:08:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 16.66 MB, 0.03 MB/s#012Interval WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 10 12:08:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:08:38
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:08:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:08:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:08:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:08:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 16.31 MB, 0.03 MB/s#012Interval WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 10 12:08:43 np0005580781 podman[160259]: 2026-01-10 17:08:43.183168489 +0000 UTC m=+0.165495655 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 10 12:08:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:08:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:08:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:48 np0005580781 podman[160414]: 2026-01-10 17:08:48.07078652 +0000 UTC m=+0.074597347 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 10 12:08:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:48 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 12:08:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:08:48.908 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 12:08:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:08:48.910 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 12:08:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:08:48.910 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 12:08:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:08:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:08:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:03 np0005580781 kernel: SELinux:  Converting 2769 SID table entries...
Jan 10 12:09:03 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 12:09:03 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 12:09:03 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 12:09:03 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 12:09:03 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 12:09:03 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 12:09:03 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 12:09:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:09:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:09:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:09:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:09:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:09:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:09:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:12 np0005580781 kernel: SELinux:  Converting 2769 SID table entries...
Jan 10 12:09:12 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 12:09:12 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 12:09:12 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 12:09:12 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 12:09:12 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 12:09:12 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 12:09:12 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 12:09:13 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:14 np0005580781 podman[160458]: 2026-01-10 17:09:14.18342087 +0000 UTC m=+0.154827875 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 10 12:09:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.876863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954877092, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1508, "num_deletes": 251, "total_data_size": 1663798, "memory_usage": 1708256, "flush_reason": "Manual Compaction"}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954895167, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1621522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7610, "largest_seqno": 9117, "table_properties": {"data_size": 1614560, "index_size": 4037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13656, "raw_average_key_size": 18, "raw_value_size": 1600653, "raw_average_value_size": 2226, "num_data_blocks": 190, "num_entries": 719, "num_filter_entries": 719, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064789, "oldest_key_time": 1768064789, "file_creation_time": 1768064954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 18360 microseconds, and 11183 cpu microseconds.
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.895243) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1621522 bytes OK
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.895285) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.897108) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.897129) EVENT_LOG_v1 {"time_micros": 1768064954897123, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.897158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1657239, prev total WAL file size 1657239, number of live WAL files 2.
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.898608) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1583KB)], [23(4492KB)]
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954898829, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 6221897, "oldest_snapshot_seqno": -1}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2840 keys, 4939457 bytes, temperature: kUnknown
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954956194, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4939457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4917720, "index_size": 13564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7109, "raw_key_size": 65944, "raw_average_key_size": 23, "raw_value_size": 4864013, "raw_average_value_size": 1712, "num_data_blocks": 606, "num_entries": 2840, "num_filter_entries": 2840, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768064954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.956840) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4939457 bytes
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.959670) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.9 rd, 85.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 4.4 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(6.9) write-amplify(3.0) OK, records in: 3354, records dropped: 514 output_compression: NoCompression
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.959748) EVENT_LOG_v1 {"time_micros": 1768064954959690, "job": 8, "event": "compaction_finished", "compaction_time_micros": 57657, "compaction_time_cpu_micros": 38124, "output_level": 6, "num_output_files": 1, "total_output_size": 4939457, "num_input_records": 3354, "num_output_records": 2840, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954960330, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768064954961365, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.898285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:09:14 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:09:14.961638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:09:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:19 np0005580781 podman[160483]: 2026-01-10 17:09:19.106095086 +0000 UTC m=+0.088475149 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 10 12:09:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:09:22 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:09:22 np0005580781 podman[160648]: 2026-01-10 17:09:22.67894171 +0000 UTC m=+0.049406597 container create 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:09:22 np0005580781 systemd[1]: Started libpod-conmon-4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be.scope.
Jan 10 12:09:22 np0005580781 podman[160648]: 2026-01-10 17:09:22.660018887 +0000 UTC m=+0.030483794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:09:22 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:09:22 np0005580781 podman[160648]: 2026-01-10 17:09:22.779682539 +0000 UTC m=+0.150147476 container init 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 12:09:22 np0005580781 podman[160648]: 2026-01-10 17:09:22.791734674 +0000 UTC m=+0.162199561 container start 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 12:09:22 np0005580781 podman[160648]: 2026-01-10 17:09:22.795994023 +0000 UTC m=+0.166458950 container attach 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 12:09:22 np0005580781 modest_einstein[160665]: 167 167
Jan 10 12:09:22 np0005580781 systemd[1]: libpod-4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be.scope: Deactivated successfully.
Jan 10 12:09:22 np0005580781 podman[160648]: 2026-01-10 17:09:22.801345565 +0000 UTC m=+0.171810492 container died 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 12:09:22 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2cb938a9f521c2b516e0e28777277f1e9be39bc85e255af12e89e26e340b2061-merged.mount: Deactivated successfully.
Jan 10 12:09:22 np0005580781 podman[160648]: 2026-01-10 17:09:22.860895397 +0000 UTC m=+0.231360284 container remove 4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_einstein, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:09:22 np0005580781 systemd[1]: libpod-conmon-4a139dde07acc18b54c33aa247a35a3c949bf3f32f217ebb0e2e4b01320fb0be.scope: Deactivated successfully.
Jan 10 12:09:23 np0005580781 podman[160689]: 2026-01-10 17:09:23.118045511 +0000 UTC m=+0.075139896 container create 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:09:23 np0005580781 systemd[1]: Started libpod-conmon-842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603.scope.
Jan 10 12:09:23 np0005580781 podman[160689]: 2026-01-10 17:09:23.088934529 +0000 UTC m=+0.046028984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:09:23 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:09:23 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:23 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:23 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:23 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:23 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:23 np0005580781 podman[160689]: 2026-01-10 17:09:23.25379906 +0000 UTC m=+0.210893515 container init 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:09:23 np0005580781 podman[160689]: 2026-01-10 17:09:23.265232006 +0000 UTC m=+0.222326421 container start 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 12:09:23 np0005580781 podman[160689]: 2026-01-10 17:09:23.270479034 +0000 UTC m=+0.227573499 container attach 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:09:23 np0005580781 vigilant_dirac[160705]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:09:23 np0005580781 vigilant_dirac[160705]: --> All data devices are unavailable
Jan 10 12:09:23 np0005580781 systemd[1]: libpod-842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603.scope: Deactivated successfully.
Jan 10 12:09:23 np0005580781 podman[160689]: 2026-01-10 17:09:23.87956123 +0000 UTC m=+0.836655625 container died 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:09:23 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a7c8594d13cc0c51d5de5363947b124418e9dcfa4f63548777450d1885dba502-merged.mount: Deactivated successfully.
Jan 10 12:09:23 np0005580781 podman[160689]: 2026-01-10 17:09:23.962011046 +0000 UTC m=+0.919105431 container remove 842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:09:23 np0005580781 systemd[1]: libpod-conmon-842efcc283907e52e21e23b875f220090579fd201c799233bdd2b167b2b9c603.scope: Deactivated successfully.
Jan 10 12:09:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:24 np0005580781 podman[160798]: 2026-01-10 17:09:24.587116006 +0000 UTC m=+0.071552777 container create 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 12:09:24 np0005580781 systemd[1]: Started libpod-conmon-1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a.scope.
Jan 10 12:09:24 np0005580781 podman[160798]: 2026-01-10 17:09:24.547315051 +0000 UTC m=+0.031751832 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:09:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:09:24 np0005580781 podman[160798]: 2026-01-10 17:09:24.705667064 +0000 UTC m=+0.190103875 container init 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 12:09:24 np0005580781 podman[160798]: 2026-01-10 17:09:24.712336966 +0000 UTC m=+0.196773717 container start 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:09:24 np0005580781 quizzical_wu[160814]: 167 167
Jan 10 12:09:24 np0005580781 systemd[1]: libpod-1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a.scope: Deactivated successfully.
Jan 10 12:09:24 np0005580781 podman[160798]: 2026-01-10 17:09:24.718303247 +0000 UTC m=+0.202740078 container attach 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:09:24 np0005580781 podman[160798]: 2026-01-10 17:09:24.719268446 +0000 UTC m=+0.203705227 container died 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:09:24 np0005580781 systemd[1]: var-lib-containers-storage-overlay-af768bd8d06d576d68877a5258bb6366f518122077251274b909b21db16e5592-merged.mount: Deactivated successfully.
Jan 10 12:09:24 np0005580781 podman[160798]: 2026-01-10 17:09:24.773159097 +0000 UTC m=+0.257595878 container remove 1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wu, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:09:24 np0005580781 systemd[1]: libpod-conmon-1789f5875dcc298bc8a8b97eb9809cf8dff7a24270542fd6fba8bd142e6bb17a.scope: Deactivated successfully.
Jan 10 12:09:25 np0005580781 podman[160838]: 2026-01-10 17:09:25.002948992 +0000 UTC m=+0.077267199 container create f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:09:25 np0005580781 systemd[1]: Started libpod-conmon-f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088.scope.
Jan 10 12:09:25 np0005580781 podman[160838]: 2026-01-10 17:09:24.968649054 +0000 UTC m=+0.042967291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:09:25 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:09:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:25 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:25 np0005580781 podman[160838]: 2026-01-10 17:09:25.092622147 +0000 UTC m=+0.166940374 container init f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:09:25 np0005580781 podman[160838]: 2026-01-10 17:09:25.104383353 +0000 UTC m=+0.178701560 container start f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 12:09:25 np0005580781 podman[160838]: 2026-01-10 17:09:25.107877818 +0000 UTC m=+0.182196095 container attach f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]: {
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:    "0": [
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:        {
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "devices": [
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "/dev/loop3"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            ],
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_name": "ceph_lv0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_size": "21470642176",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "name": "ceph_lv0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "tags": {
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cluster_name": "ceph",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.crush_device_class": "",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.encrypted": "0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.objectstore": "bluestore",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osd_id": "0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.type": "block",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.vdo": "0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.with_tpm": "0"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            },
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "type": "block",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "vg_name": "ceph_vg0"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:        }
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:    ],
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:    "1": [
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:        {
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "devices": [
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "/dev/loop4"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            ],
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_name": "ceph_lv1",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_size": "21470642176",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "name": "ceph_lv1",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "tags": {
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cluster_name": "ceph",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.crush_device_class": "",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.encrypted": "0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.objectstore": "bluestore",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osd_id": "1",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.type": "block",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.vdo": "0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.with_tpm": "0"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            },
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "type": "block",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "vg_name": "ceph_vg1"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:        }
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:    ],
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:    "2": [
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:        {
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "devices": [
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "/dev/loop5"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            ],
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_name": "ceph_lv2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_size": "21470642176",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "name": "ceph_lv2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "tags": {
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.cluster_name": "ceph",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.crush_device_class": "",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.encrypted": "0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.objectstore": "bluestore",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osd_id": "2",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.type": "block",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.vdo": "0",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:                "ceph.with_tpm": "0"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            },
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "type": "block",
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:            "vg_name": "ceph_vg2"
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:        }
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]:    ]
Jan 10 12:09:25 np0005580781 elegant_kalam[160910]: }
Jan 10 12:09:25 np0005580781 systemd[1]: libpod-f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088.scope: Deactivated successfully.
Jan 10 12:09:25 np0005580781 podman[160838]: 2026-01-10 17:09:25.4855539 +0000 UTC m=+0.559872107 container died f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:09:25 np0005580781 systemd[1]: var-lib-containers-storage-overlay-515d40665d1aa293b52bd909a83ae181dcbee97390baf69ac1e9de4a0cd13b00-merged.mount: Deactivated successfully.
Jan 10 12:09:25 np0005580781 podman[160838]: 2026-01-10 17:09:25.538577495 +0000 UTC m=+0.612895702 container remove f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 12:09:25 np0005580781 systemd[1]: libpod-conmon-f6f28e666f15392b219c068d85146774647d8fa1ab30ef2370107c713cacb088.scope: Deactivated successfully.
Jan 10 12:09:26 np0005580781 podman[161446]: 2026-01-10 17:09:25.999533947 +0000 UTC m=+0.040495616 container create 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 10 12:09:26 np0005580781 systemd[1]: Started libpod-conmon-16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e.scope.
Jan 10 12:09:26 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:09:26 np0005580781 podman[161446]: 2026-01-10 17:09:26.067270148 +0000 UTC m=+0.108231797 container init 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 10 12:09:26 np0005580781 podman[161446]: 2026-01-10 17:09:26.074886148 +0000 UTC m=+0.115847807 container start 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 12:09:26 np0005580781 podman[161446]: 2026-01-10 17:09:25.979629995 +0000 UTC m=+0.020591694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:09:26 np0005580781 happy_diffie[161517]: 167 167
Jan 10 12:09:26 np0005580781 podman[161446]: 2026-01-10 17:09:26.078336083 +0000 UTC m=+0.119297752 container attach 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 12:09:26 np0005580781 systemd[1]: libpod-16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e.scope: Deactivated successfully.
Jan 10 12:09:26 np0005580781 podman[161446]: 2026-01-10 17:09:26.079175268 +0000 UTC m=+0.120136957 container died 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 12:09:26 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ea6023990152908865953e6ccf0a149252e22ada399c7bcd0daad3f906e79879-merged.mount: Deactivated successfully.
Jan 10 12:09:26 np0005580781 podman[161446]: 2026-01-10 17:09:26.122782668 +0000 UTC m=+0.163744317 container remove 16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:09:26 np0005580781 systemd[1]: libpod-conmon-16b0862eae473dedcead60796275d0a926bbf656a0f7e704c3c6071391002b2e.scope: Deactivated successfully.
Jan 10 12:09:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:26 np0005580781 podman[161676]: 2026-01-10 17:09:26.369018881 +0000 UTC m=+0.051547711 container create c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:09:26 np0005580781 systemd[1]: Started libpod-conmon-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope.
Jan 10 12:09:26 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:09:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:09:26 np0005580781 podman[161676]: 2026-01-10 17:09:26.343008364 +0000 UTC m=+0.025537204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:09:26 np0005580781 podman[161676]: 2026-01-10 17:09:26.455017224 +0000 UTC m=+0.137546124 container init c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 12:09:26 np0005580781 podman[161676]: 2026-01-10 17:09:26.468514333 +0000 UTC m=+0.151043163 container start c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 12:09:26 np0005580781 podman[161676]: 2026-01-10 17:09:26.472686759 +0000 UTC m=+0.155215679 container attach c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:09:27 np0005580781 lvm[162293]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:09:27 np0005580781 lvm[162291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:09:27 np0005580781 lvm[162291]: VG ceph_vg0 finished
Jan 10 12:09:27 np0005580781 lvm[162293]: VG ceph_vg1 finished
Jan 10 12:09:27 np0005580781 lvm[162298]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:09:27 np0005580781 lvm[162298]: VG ceph_vg2 finished
Jan 10 12:09:27 np0005580781 lvm[162343]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:09:27 np0005580781 lvm[162343]: VG ceph_vg2 finished
Jan 10 12:09:27 np0005580781 interesting_thompson[161769]: {}
Jan 10 12:09:27 np0005580781 systemd[1]: libpod-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope: Deactivated successfully.
Jan 10 12:09:27 np0005580781 podman[161676]: 2026-01-10 17:09:27.326217524 +0000 UTC m=+1.008746344 container died c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:09:27 np0005580781 systemd[1]: libpod-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope: Consumed 1.365s CPU time.
Jan 10 12:09:27 np0005580781 systemd[1]: var-lib-containers-storage-overlay-827601ea8636821a0ddabe8f9b3dc8b0f051bddbb1e3be53f930001bde03f718-merged.mount: Deactivated successfully.
Jan 10 12:09:27 np0005580781 podman[161676]: 2026-01-10 17:09:27.3799252 +0000 UTC m=+1.062454030 container remove c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_thompson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 10 12:09:27 np0005580781 systemd[1]: libpod-conmon-c5b64850e7339a807e57fdc7a6b8fe7f42c0a1667704a8e07d6774a47594c6c3.scope: Deactivated successfully.
Jan 10 12:09:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:09:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:09:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:09:27 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:09:27 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:09:27 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:09:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:09:38
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', 'cephfs.cephfs.data']
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:09:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:09:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:09:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:09:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:09:45 np0005580781 podman[170688]: 2026-01-10 17:09:45.142383684 +0000 UTC m=+0.130705777 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 10 12:09:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:09:48.911 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:09:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:09:48.914 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:09:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:09:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:09:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:50 np0005580781 podman[172895]: 2026-01-10 17:09:50.045410859 +0000 UTC m=+0.051419238 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 10 12:09:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:09:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:09:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:10:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:10:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:10:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:10:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:10:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:10:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:13 np0005580781 kernel: SELinux:  Converting 2770 SID table entries...
Jan 10 12:10:13 np0005580781 kernel: SELinux:  policy capability network_peer_controls=1
Jan 10 12:10:13 np0005580781 kernel: SELinux:  policy capability open_perms=1
Jan 10 12:10:13 np0005580781 kernel: SELinux:  policy capability extended_socket_class=1
Jan 10 12:10:13 np0005580781 kernel: SELinux:  policy capability always_check_network=0
Jan 10 12:10:13 np0005580781 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 10 12:10:13 np0005580781 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 10 12:10:13 np0005580781 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 10 12:10:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:14 np0005580781 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 12:10:14 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 10 12:10:14 np0005580781 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 10 12:10:16 np0005580781 podman[178049]: 2026-01-10 17:10:16.171396941 +0000 UTC m=+0.145575297 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 10 12:10:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:20 np0005580781 podman[178281]: 2026-01-10 17:10:20.809047432 +0000 UTC m=+0.080930057 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 10 12:10:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:24 np0005580781 systemd[1]: Stopping OpenSSH server daemon...
Jan 10 12:10:24 np0005580781 systemd[1]: sshd.service: Deactivated successfully.
Jan 10 12:10:24 np0005580781 systemd[1]: Stopped OpenSSH server daemon.
Jan 10 12:10:24 np0005580781 systemd[1]: sshd.service: Consumed 4.907s CPU time, read 564.0K from disk, written 28.0K to disk.
Jan 10 12:10:24 np0005580781 systemd[1]: Stopped target sshd-keygen.target.
Jan 10 12:10:24 np0005580781 systemd[1]: Stopping sshd-keygen.target...
Jan 10 12:10:24 np0005580781 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 12:10:24 np0005580781 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 12:10:24 np0005580781 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 10 12:10:24 np0005580781 systemd[1]: Reached target sshd-keygen.target.
Jan 10 12:10:24 np0005580781 systemd[1]: Starting OpenSSH server daemon...
Jan 10 12:10:24 np0005580781 systemd[1]: Started OpenSSH server daemon.
Jan 10 12:10:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:27 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 12:10:27 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 12:10:27 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:27 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:27 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:27 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 12:10:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:29 np0005580781 podman[180147]: 2026-01-10 17:10:29.562533842 +0000 UTC m=+0.871541113 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 12:10:29 np0005580781 podman[180147]: 2026-01-10 17:10:29.707682121 +0000 UTC m=+1.016689372 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:10:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:10:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:10:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:10:31 np0005580781 python3.9[182695]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:10:31 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:31 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:31 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:31 np0005580781 podman[183170]: 2026-01-10 17:10:31.754544336 +0000 UTC m=+0.071146790 container create c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:10:31 np0005580781 podman[183170]: 2026-01-10 17:10:31.721526289 +0000 UTC m=+0.038128773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:10:31 np0005580781 systemd[1]: Started libpod-conmon-c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072.scope.
Jan 10 12:10:31 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:10:32 np0005580781 podman[183170]: 2026-01-10 17:10:32.00910688 +0000 UTC m=+0.325709364 container init c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:10:32 np0005580781 podman[183170]: 2026-01-10 17:10:32.018507846 +0000 UTC m=+0.335110320 container start c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 10 12:10:32 np0005580781 podman[183170]: 2026-01-10 17:10:32.02284392 +0000 UTC m=+0.339446424 container attach c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 12:10:32 np0005580781 eager_franklin[183451]: 167 167
Jan 10 12:10:32 np0005580781 systemd[1]: libpod-c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072.scope: Deactivated successfully.
Jan 10 12:10:32 np0005580781 podman[183170]: 2026-01-10 17:10:32.026834033 +0000 UTC m=+0.343436557 container died c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 10 12:10:32 np0005580781 systemd[1]: var-lib-containers-storage-overlay-34d36906dee0b14a2834cf381c242f1de5ce707410bef75d3f8a141f7641eba0-merged.mount: Deactivated successfully.
Jan 10 12:10:32 np0005580781 podman[183170]: 2026-01-10 17:10:32.079913839 +0000 UTC m=+0.396516293 container remove c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:10:32 np0005580781 systemd[1]: libpod-conmon-c036a71f5edf3060507e789ef9603cf2381269aa92aecf97a56773c98230e072.scope: Deactivated successfully.
Jan 10 12:10:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:32 np0005580781 podman[183767]: 2026-01-10 17:10:32.285544713 +0000 UTC m=+0.053560381 container create 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:10:32 np0005580781 systemd[1]: Started libpod-conmon-04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082.scope.
Jan 10 12:10:32 np0005580781 podman[183767]: 2026-01-10 17:10:32.260289817 +0000 UTC m=+0.028305495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:10:32 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:10:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:32 np0005580781 podman[183767]: 2026-01-10 17:10:32.3802338 +0000 UTC m=+0.148249508 container init 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:10:32 np0005580781 podman[183767]: 2026-01-10 17:10:32.38868439 +0000 UTC m=+0.156700088 container start 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 12:10:32 np0005580781 podman[183767]: 2026-01-10 17:10:32.392995812 +0000 UTC m=+0.161011480 container attach 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:10:32 np0005580781 python3.9[184132]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:10:32 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:32 np0005580781 brave_faraday[183942]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:10:32 np0005580781 brave_faraday[183942]: --> All data devices are unavailable
Jan 10 12:10:32 np0005580781 podman[183767]: 2026-01-10 17:10:32.930514026 +0000 UTC m=+0.698529704 container died 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 10 12:10:32 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:32 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:33 np0005580781 systemd[1]: libpod-04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082.scope: Deactivated successfully.
Jan 10 12:10:33 np0005580781 systemd[1]: var-lib-containers-storage-overlay-04a55dd953155c7e5424dc2dedeee5f85002e556834a286292ed4fe28f6487e5-merged.mount: Deactivated successfully.
Jan 10 12:10:33 np0005580781 podman[183767]: 2026-01-10 17:10:33.176313991 +0000 UTC m=+0.944329689 container remove 04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_faraday, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 12:10:33 np0005580781 systemd[1]: libpod-conmon-04cf323b205a08c2d59f5ddb986b489de577e4b20fb0fb24a967b182dda53082.scope: Deactivated successfully.
Jan 10 12:10:33 np0005580781 podman[185365]: 2026-01-10 17:10:33.724284621 +0000 UTC m=+0.063480632 container create aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:10:33 np0005580781 systemd[1]: Started libpod-conmon-aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348.scope.
Jan 10 12:10:33 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:10:33 np0005580781 podman[185365]: 2026-01-10 17:10:33.696937015 +0000 UTC m=+0.036133036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:10:33 np0005580781 podman[185365]: 2026-01-10 17:10:33.81655612 +0000 UTC m=+0.155752121 container init aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 12:10:33 np0005580781 podman[185365]: 2026-01-10 17:10:33.8246829 +0000 UTC m=+0.163878901 container start aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 12:10:33 np0005580781 podman[185365]: 2026-01-10 17:10:33.828975062 +0000 UTC m=+0.168171103 container attach aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:10:33 np0005580781 busy_keldysh[185490]: 167 167
Jan 10 12:10:33 np0005580781 systemd[1]: libpod-aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348.scope: Deactivated successfully.
Jan 10 12:10:33 np0005580781 podman[185365]: 2026-01-10 17:10:33.83278212 +0000 UTC m=+0.171978091 container died aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 12:10:33 np0005580781 systemd[1]: var-lib-containers-storage-overlay-aef8e8aacc099adc0655ad65a161ab96f1a445326e7598f6c596a1621d9e414d-merged.mount: Deactivated successfully.
Jan 10 12:10:33 np0005580781 podman[185365]: 2026-01-10 17:10:33.888253744 +0000 UTC m=+0.227449755 container remove aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:10:33 np0005580781 systemd[1]: libpod-conmon-aed072865a6b1ea879be3be1881f06eb8366734c085bd668addadcd1fbf8c348.scope: Deactivated successfully.
Jan 10 12:10:33 np0005580781 python3.9[185443]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:10:34 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:34 np0005580781 podman[185703]: 2026-01-10 17:10:34.083094853 +0000 UTC m=+0.063449671 container create e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 12:10:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:34 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:34 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:34 np0005580781 podman[185703]: 2026-01-10 17:10:34.063003663 +0000 UTC m=+0.043358491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:10:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:34 np0005580781 systemd[1]: Started libpod-conmon-e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb.scope.
Jan 10 12:10:34 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:10:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:34 np0005580781 podman[185703]: 2026-01-10 17:10:34.484557036 +0000 UTC m=+0.464911894 container init e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:10:34 np0005580781 podman[185703]: 2026-01-10 17:10:34.498642255 +0000 UTC m=+0.478997073 container start e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 12:10:34 np0005580781 podman[185703]: 2026-01-10 17:10:34.502710691 +0000 UTC m=+0.483065529 container attach e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 12:10:34 np0005580781 eager_snyder[186059]: {
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:    "0": [
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:        {
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "devices": [
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "/dev/loop3"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            ],
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_name": "ceph_lv0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_size": "21470642176",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "name": "ceph_lv0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "tags": {
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cluster_name": "ceph",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.crush_device_class": "",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.encrypted": "0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.objectstore": "bluestore",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osd_id": "0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.type": "block",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.vdo": "0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.with_tpm": "0"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            },
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "type": "block",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "vg_name": "ceph_vg0"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:        }
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:    ],
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:    "1": [
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:        {
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "devices": [
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "/dev/loop4"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            ],
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_name": "ceph_lv1",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_size": "21470642176",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "name": "ceph_lv1",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "tags": {
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cluster_name": "ceph",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.crush_device_class": "",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.encrypted": "0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.objectstore": "bluestore",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osd_id": "1",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.type": "block",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.vdo": "0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.with_tpm": "0"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            },
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "type": "block",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "vg_name": "ceph_vg1"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:        }
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:    ],
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:    "2": [
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:        {
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "devices": [
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "/dev/loop5"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            ],
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_name": "ceph_lv2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_size": "21470642176",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "name": "ceph_lv2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "tags": {
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.cluster_name": "ceph",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.crush_device_class": "",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.encrypted": "0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.objectstore": "bluestore",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osd_id": "2",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.type": "block",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.vdo": "0",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:                "ceph.with_tpm": "0"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            },
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "type": "block",
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:            "vg_name": "ceph_vg2"
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:        }
Jan 10 12:10:34 np0005580781 eager_snyder[186059]:    ]
Jan 10 12:10:34 np0005580781 eager_snyder[186059]: }
Jan 10 12:10:34 np0005580781 systemd[1]: libpod-e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb.scope: Deactivated successfully.
Jan 10 12:10:34 np0005580781 podman[185703]: 2026-01-10 17:10:34.827553169 +0000 UTC m=+0.807908037 container died e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 12:10:34 np0005580781 systemd[1]: var-lib-containers-storage-overlay-051dc749b002bc16f7dc5e83d4df41a9839bcc6f8cb58ea2fa122fa3774ed6ef-merged.mount: Deactivated successfully.
Jan 10 12:10:34 np0005580781 podman[185703]: 2026-01-10 17:10:34.881407277 +0000 UTC m=+0.861762095 container remove e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_snyder, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 12:10:34 np0005580781 systemd[1]: libpod-conmon-e46bf49d2ada4c65184f885aace18a29a1b5161084c18f132e1470b43b147acb.scope: Deactivated successfully.
Jan 10 12:10:35 np0005580781 python3.9[186666]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:10:35 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:35 np0005580781 podman[187005]: 2026-01-10 17:10:35.408928507 +0000 UTC m=+0.045149752 container create a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:10:35 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:35 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:35 np0005580781 podman[187005]: 2026-01-10 17:10:35.389254899 +0000 UTC m=+0.025476154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:10:35 np0005580781 systemd[1]: Started libpod-conmon-a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a.scope.
Jan 10 12:10:35 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:10:35 np0005580781 podman[187005]: 2026-01-10 17:10:35.723685688 +0000 UTC m=+0.359907023 container init a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 12:10:35 np0005580781 podman[187005]: 2026-01-10 17:10:35.732295073 +0000 UTC m=+0.368516308 container start a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 10 12:10:35 np0005580781 vigilant_dirac[187389]: 167 167
Jan 10 12:10:35 np0005580781 systemd[1]: libpod-a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a.scope: Deactivated successfully.
Jan 10 12:10:35 np0005580781 podman[187005]: 2026-01-10 17:10:35.768334765 +0000 UTC m=+0.404556000 container attach a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 12:10:35 np0005580781 podman[187005]: 2026-01-10 17:10:35.76955563 +0000 UTC m=+0.405776885 container died a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:10:35 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2bac474c00fbbc4397d63901a523d8dcb8f7dc01fdb0182025c38e12ee30047d-merged.mount: Deactivated successfully.
Jan 10 12:10:35 np0005580781 podman[187005]: 2026-01-10 17:10:35.888333561 +0000 UTC m=+0.524554806 container remove a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_dirac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:10:35 np0005580781 systemd[1]: libpod-conmon-a7d436eb8c65983c543dd45362d9fa954c587cef9f39e69cde16b0803e859d6a.scope: Deactivated successfully.
Jan 10 12:10:36 np0005580781 podman[187823]: 2026-01-10 17:10:36.091913638 +0000 UTC m=+0.063940576 container create 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:10:36 np0005580781 systemd[1]: Started libpod-conmon-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope.
Jan 10 12:10:36 np0005580781 podman[187823]: 2026-01-10 17:10:36.071675353 +0000 UTC m=+0.043702261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:10:36 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:10:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:10:36 np0005580781 podman[187823]: 2026-01-10 17:10:36.19453953 +0000 UTC m=+0.166566438 container init 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 12:10:36 np0005580781 podman[187823]: 2026-01-10 17:10:36.20758485 +0000 UTC m=+0.179611748 container start 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 12:10:36 np0005580781 podman[187823]: 2026-01-10 17:10:36.21147037 +0000 UTC m=+0.183497268 container attach 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:10:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:36 np0005580781 python3.9[188153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:36 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:36 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:36 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:36 np0005580781 lvm[188842]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:10:36 np0005580781 lvm[188842]: VG ceph_vg0 finished
Jan 10 12:10:36 np0005580781 bold_fermat[188051]: {}
Jan 10 12:10:36 np0005580781 lvm[188854]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:10:36 np0005580781 lvm[188854]: VG ceph_vg1 finished
Jan 10 12:10:36 np0005580781 lvm[188859]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:10:36 np0005580781 lvm[188859]: VG ceph_vg2 finished
Jan 10 12:10:36 np0005580781 systemd[1]: libpod-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope: Deactivated successfully.
Jan 10 12:10:36 np0005580781 conmon[188051]: conmon 7842f8e45d7aabdca053 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope/container/memory.events
Jan 10 12:10:36 np0005580781 systemd[1]: libpod-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope: Consumed 1.272s CPU time.
Jan 10 12:10:36 np0005580781 podman[187823]: 2026-01-10 17:10:36.992151454 +0000 UTC m=+0.964178352 container died 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:10:37 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e03a3c65f34a40c92acbbb8dbe09931fc4af0c142769c385555968e66cef2572-merged.mount: Deactivated successfully.
Jan 10 12:10:37 np0005580781 podman[187823]: 2026-01-10 17:10:37.04662614 +0000 UTC m=+1.018653038 container remove 7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_fermat, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 10 12:10:37 np0005580781 systemd[1]: libpod-conmon-7842f8e45d7aabdca053302e58c596317d7ce809f049668d0c7d91feeacd10b6.scope: Deactivated successfully.
Jan 10 12:10:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:10:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:10:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:37 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:37 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:10:37 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 12:10:37 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 12:10:37 np0005580781 systemd[1]: man-db-cache-update.service: Consumed 12.381s CPU time.
Jan 10 12:10:37 np0005580781 systemd[1]: run-r835e44116d024a76a38504c44c3a9875.service: Deactivated successfully.
Jan 10 12:10:37 np0005580781 python3.9[189439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:37 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:37 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:37 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:10:38
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'backups', 'vms']
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:10:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:10:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:10:39 np0005580781 python3.9[189702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:39 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:39 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:39 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:40 np0005580781 python3.9[189892]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:41 np0005580781 python3.9[190047]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:41 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:41 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:41 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:42 np0005580781 python3.9[190237]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 10 12:10:42 np0005580781 systemd[1]: Reloading.
Jan 10 12:10:42 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:10:42 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:10:43 np0005580781 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 10 12:10:43 np0005580781 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 10 12:10:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:44 np0005580781 python3.9[190430]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:10:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:10:45 np0005580781 python3.9[190585]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:46 np0005580781 python3.9[190740]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:46 np0005580781 podman[190742]: 2026-01-10 17:10:46.426678971 +0000 UTC m=+0.125078301 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 12:10:47 np0005580781 python3.9[190921]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:48 np0005580781 python3.9[191076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:10:48.911 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 12:10:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:10:48.912 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 12:10:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:10:48.912 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 12:10:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:49 np0005580781 python3.9[191233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:50 np0005580781 python3.9[191388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:50 np0005580781 podman[191515]: 2026-01-10 17:10:50.97467993 +0000 UTC m=+0.073912809 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 12:10:51 np0005580781 python3.9[191562]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:52 np0005580781 python3.9[191717]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:53 np0005580781 python3.9[191872]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:54 np0005580781 python3.9[192027]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:55 np0005580781 python3.9[192182]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:56 np0005580781 python3.9[192337]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:57 np0005580781 python3.9[192492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 10 12:10:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:10:58 np0005580781 python3.9[192647]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:10:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:10:59 np0005580781 python3.9[192799]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:11:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:00 np0005580781 python3.9[192951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:11:01 np0005580781 python3.9[193103]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:11:01 np0005580781 python3.9[193255]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:11:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:02 np0005580781 python3.9[193407]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:11:03 np0005580781 python3.9[193559]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:04 np0005580781 python3.9[193684]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065062.964221-549-125725843356237/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:05 np0005580781 python3.9[193836]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:06 np0005580781 python3.9[193961]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065064.8034894-549-280104799303355/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:06 np0005580781 python3.9[194113]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:07 np0005580781 python3.9[194238]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065066.2994268-549-60611555623907/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:08 np0005580781 python3.9[194390]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:11:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:11:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:11:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:11:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:11:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:11:09 np0005580781 python3.9[194515]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065067.755869-549-254126462766809/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:09 np0005580781 python3.9[194667]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:10 np0005580781 python3.9[194792]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065069.3454819-549-115994752453085/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:11 np0005580781 python3.9[194944]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:12 np0005580781 python3.9[195069]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065070.8902962-549-280479923475804/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:13 np0005580781 python3.9[195221]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:13 np0005580781 python3.9[195344]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065072.391514-549-252219836644825/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:14 np0005580781 python3.9[195496]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:15 np0005580781 python3.9[195621]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768065073.9724784-549-39402097765111/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:15 np0005580781 python3.9[195773]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 10 12:11:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:16 np0005580781 podman[195898]: 2026-01-10 17:11:16.867988011 +0000 UTC m=+0.189173069 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 10 12:11:16 np0005580781 python3.9[195943]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:17 np0005580781 python3.9[196102]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:18 np0005580781 python3.9[196254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:19 np0005580781 python3.9[196406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:20 np0005580781 python3.9[196558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:20 np0005580781 python3.9[196710]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:21 np0005580781 podman[196834]: 2026-01-10 17:11:21.435804964 +0000 UTC m=+0.095208993 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 10 12:11:21 np0005580781 python3.9[196881]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:22 np0005580781 python3.9[197033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:23 np0005580781 python3.9[197185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:23 np0005580781 python3.9[197337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:24 np0005580781 python3.9[197489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:25 np0005580781 python3.9[197641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:26 np0005580781 python3.9[197793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:26 np0005580781 python3.9[197945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:27 np0005580781 python3.9[198097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:28 np0005580781 python3.9[198220]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065087.000799-770-85275796795406/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:29 np0005580781 python3.9[198372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:29 np0005580781 python3.9[198495]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065088.4722717-770-132047822032570/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:30 np0005580781 python3.9[198647]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:31 np0005580781 python3.9[198770]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065089.9553614-770-250132790497331/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:32 np0005580781 python3.9[198922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:32 np0005580781 python3.9[199045]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065091.4737563-770-172816879163191/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:33 np0005580781 python3.9[199197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:34 np0005580781 python3.9[199320]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065093.1165762-770-230128918859271/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:35 np0005580781 python3.9[199472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:36 np0005580781 python3.9[199595]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065094.6834524-770-102312907588834/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:36 np0005580781 python3.9[199747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:37 np0005580781 python3.9[199898]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065096.2127438-770-232869817582423/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:11:38
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'images', 'backups', 'cephfs.cephfs.data']
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:11:38 np0005580781 python3.9[200104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:11:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:11:38 np0005580781 podman[200237]: 2026-01-10 17:11:38.587569276 +0000 UTC m=+0.077560336 container create 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 12:11:38 np0005580781 systemd[1]: Started libpod-conmon-90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da.scope.
Jan 10 12:11:38 np0005580781 podman[200237]: 2026-01-10 17:11:38.554772705 +0000 UTC m=+0.044763785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:11:38 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:11:38 np0005580781 podman[200237]: 2026-01-10 17:11:38.706168788 +0000 UTC m=+0.196159878 container init 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 12:11:38 np0005580781 podman[200237]: 2026-01-10 17:11:38.719455629 +0000 UTC m=+0.209446669 container start 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 10 12:11:38 np0005580781 podman[200237]: 2026-01-10 17:11:38.724575416 +0000 UTC m=+0.214566456 container attach 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:11:38 np0005580781 eloquent_mclean[200277]: 167 167
Jan 10 12:11:38 np0005580781 systemd[1]: libpod-90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da.scope: Deactivated successfully.
Jan 10 12:11:38 np0005580781 podman[200237]: 2026-01-10 17:11:38.728926731 +0000 UTC m=+0.218917741 container died 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 12:11:38 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2773bebcaee26635972924452a7f742337b0c97ff138b834a823a0942bab93d3-merged.mount: Deactivated successfully.
Jan 10 12:11:38 np0005580781 podman[200237]: 2026-01-10 17:11:38.776933148 +0000 UTC m=+0.266924158 container remove 90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:11:38 np0005580781 systemd[1]: libpod-conmon-90f057f83eb8d3d6ee893ad97e4a481463639fadf6b4c1c002334e1cdfebf8da.scope: Deactivated successfully.
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:11:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:11:38 np0005580781 python3.9[200312]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065097.65904-770-197731767263542/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:39 np0005580781 podman[200328]: 2026-01-10 17:11:39.039065297 +0000 UTC m=+0.065749577 container create 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:11:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:11:39 np0005580781 systemd[1]: Started libpod-conmon-46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798.scope.
Jan 10 12:11:39 np0005580781 podman[200328]: 2026-01-10 17:11:39.009750276 +0000 UTC m=+0.036434626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:11:39 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:11:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:39 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:39 np0005580781 podman[200328]: 2026-01-10 17:11:39.149253918 +0000 UTC m=+0.175938258 container init 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:11:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:39 np0005580781 podman[200328]: 2026-01-10 17:11:39.163921709 +0000 UTC m=+0.190605999 container start 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 12:11:39 np0005580781 podman[200328]: 2026-01-10 17:11:39.169282593 +0000 UTC m=+0.195966883 container attach 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 12:11:39 np0005580781 infallible_mcclintock[200368]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:11:39 np0005580781 infallible_mcclintock[200368]: --> All data devices are unavailable
Jan 10 12:11:39 np0005580781 python3.9[200507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:39 np0005580781 systemd[1]: libpod-46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798.scope: Deactivated successfully.
Jan 10 12:11:39 np0005580781 podman[200328]: 2026-01-10 17:11:39.766330649 +0000 UTC m=+0.793014929 container died 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:11:39 np0005580781 systemd[1]: var-lib-containers-storage-overlay-d9753a717e14bf02cf9f48bfe75564743b7eeab2911b57ad90f8c8395e1bb539-merged.mount: Deactivated successfully.
Jan 10 12:11:39 np0005580781 podman[200328]: 2026-01-10 17:11:39.835219225 +0000 UTC m=+0.861903475 container remove 46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mcclintock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:11:39 np0005580781 systemd[1]: libpod-conmon-46e6ec422e44104248941f16dd00fe6ff10ca0d40c545260c45bcb4866154798.scope: Deactivated successfully.
Jan 10 12:11:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:40 np0005580781 python3.9[200700]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065099.1982095-770-147458411985776/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:40 np0005580781 podman[200713]: 2026-01-10 17:11:40.410573549 +0000 UTC m=+0.073650814 container create 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:11:40 np0005580781 auditd[702]: Audit daemon rotating log files
Jan 10 12:11:40 np0005580781 systemd[1]: Started libpod-conmon-923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219.scope.
Jan 10 12:11:40 np0005580781 podman[200713]: 2026-01-10 17:11:40.378656873 +0000 UTC m=+0.041734158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:11:40 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:11:40 np0005580781 podman[200713]: 2026-01-10 17:11:40.518270088 +0000 UTC m=+0.181347413 container init 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:11:40 np0005580781 podman[200713]: 2026-01-10 17:11:40.529259943 +0000 UTC m=+0.192337188 container start 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:11:40 np0005580781 podman[200713]: 2026-01-10 17:11:40.53298312 +0000 UTC m=+0.196060395 container attach 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 12:11:40 np0005580781 fervent_bell[200736]: 167 167
Jan 10 12:11:40 np0005580781 systemd[1]: libpod-923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219.scope: Deactivated successfully.
Jan 10 12:11:40 np0005580781 podman[200713]: 2026-01-10 17:11:40.537760397 +0000 UTC m=+0.200837662 container died 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:11:40 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b340413d28be8038ba683d8d5bf16cd03bfc9069d5318865fafaca02da68f09e-merged.mount: Deactivated successfully.
Jan 10 12:11:40 np0005580781 podman[200713]: 2026-01-10 17:11:40.594366321 +0000 UTC m=+0.257443596 container remove 923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 12:11:40 np0005580781 systemd[1]: libpod-conmon-923fb0b395789ad9bdb87369e159369d2be464e9d00feec853ace7cce34a8219.scope: Deactivated successfully.
Jan 10 12:11:40 np0005580781 podman[200829]: 2026-01-10 17:11:40.834589922 +0000 UTC m=+0.050126029 container create d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:11:40 np0005580781 systemd[1]: Started libpod-conmon-d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6.scope.
Jan 10 12:11:40 np0005580781 podman[200829]: 2026-01-10 17:11:40.81257337 +0000 UTC m=+0.028109457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:11:40 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:11:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:40 np0005580781 podman[200829]: 2026-01-10 17:11:40.954844842 +0000 UTC m=+0.170380929 container init d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 12:11:40 np0005580781 podman[200829]: 2026-01-10 17:11:40.962853421 +0000 UTC m=+0.178389538 container start d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:11:40 np0005580781 podman[200829]: 2026-01-10 17:11:40.967652679 +0000 UTC m=+0.183188776 container attach d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:11:41 np0005580781 python3.9[200922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]: {
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:    "0": [
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:        {
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "devices": [
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "/dev/loop3"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            ],
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_name": "ceph_lv0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_size": "21470642176",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "name": "ceph_lv0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "tags": {
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cluster_name": "ceph",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.crush_device_class": "",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.encrypted": "0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.objectstore": "bluestore",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osd_id": "0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.type": "block",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.vdo": "0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.with_tpm": "0"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            },
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "type": "block",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "vg_name": "ceph_vg0"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:        }
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:    ],
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:    "1": [
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:        {
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "devices": [
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "/dev/loop4"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            ],
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_name": "ceph_lv1",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_size": "21470642176",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "name": "ceph_lv1",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "tags": {
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cluster_name": "ceph",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.crush_device_class": "",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.encrypted": "0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.objectstore": "bluestore",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osd_id": "1",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.type": "block",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.vdo": "0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.with_tpm": "0"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            },
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "type": "block",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "vg_name": "ceph_vg1"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:        }
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:    ],
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:    "2": [
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:        {
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "devices": [
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "/dev/loop5"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            ],
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_name": "ceph_lv2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_size": "21470642176",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "name": "ceph_lv2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "tags": {
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.cluster_name": "ceph",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.crush_device_class": "",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.encrypted": "0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.objectstore": "bluestore",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osd_id": "2",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.type": "block",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.vdo": "0",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:                "ceph.with_tpm": "0"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            },
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "type": "block",
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:            "vg_name": "ceph_vg2"
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:        }
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]:    ]
Jan 10 12:11:41 np0005580781 beautiful_mahavira[200886]: }
Jan 10 12:11:41 np0005580781 systemd[1]: libpod-d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6.scope: Deactivated successfully.
Jan 10 12:11:41 np0005580781 podman[200829]: 2026-01-10 17:11:41.30653886 +0000 UTC m=+0.522074927 container died d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:11:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay-851b02ed48d195721f08c8e8eb5c2d9e922d0d6ced6b2979d4aa6f3364eaee7a-merged.mount: Deactivated successfully.
Jan 10 12:11:41 np0005580781 podman[200829]: 2026-01-10 17:11:41.352852429 +0000 UTC m=+0.568388496 container remove d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:11:41 np0005580781 systemd[1]: libpod-conmon-d1057a97de0ef2a3ffcf8b585af6afe64ff95df7540c283e776173334e8b8dd6.scope: Deactivated successfully.
Jan 10 12:11:41 np0005580781 python3.9[201110]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065100.6172652-770-258523656904194/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:41 np0005580781 podman[201123]: 2026-01-10 17:11:41.832006304 +0000 UTC m=+0.057000377 container create fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:11:41 np0005580781 systemd[1]: Started libpod-conmon-fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9.scope.
Jan 10 12:11:41 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:11:41 np0005580781 podman[201123]: 2026-01-10 17:11:41.816388166 +0000 UTC m=+0.041382209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:11:41 np0005580781 podman[201123]: 2026-01-10 17:11:41.92599958 +0000 UTC m=+0.150993633 container init fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:11:41 np0005580781 podman[201123]: 2026-01-10 17:11:41.936183822 +0000 UTC m=+0.161177845 container start fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 12:11:41 np0005580781 happy_gates[201160]: 167 167
Jan 10 12:11:41 np0005580781 podman[201123]: 2026-01-10 17:11:41.942011819 +0000 UTC m=+0.167005862 container attach fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:11:41 np0005580781 systemd[1]: libpod-fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9.scope: Deactivated successfully.
Jan 10 12:11:41 np0005580781 podman[201123]: 2026-01-10 17:11:41.943509642 +0000 UTC m=+0.168503665 container died fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:11:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay-09643f5472ecdff20441e64cba84c9202e7e059b4778127df19b32b392e7982e-merged.mount: Deactivated successfully.
Jan 10 12:11:41 np0005580781 podman[201123]: 2026-01-10 17:11:41.980330218 +0000 UTC m=+0.205324241 container remove fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:11:41 np0005580781 systemd[1]: libpod-conmon-fef76ad95a6897be16b648f543223f6fdfde38c0bebc9420b58768ef7f9a49b9.scope: Deactivated successfully.
Jan 10 12:11:42 np0005580781 podman[201242]: 2026-01-10 17:11:42.186918084 +0000 UTC m=+0.055656158 container create b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 12:11:42 np0005580781 systemd[1]: Started libpod-conmon-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope.
Jan 10 12:11:42 np0005580781 podman[201242]: 2026-01-10 17:11:42.158225351 +0000 UTC m=+0.026963464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:11:42 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:11:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:42 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:11:42 np0005580781 podman[201242]: 2026-01-10 17:11:42.301716418 +0000 UTC m=+0.170454531 container init b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:11:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:42 np0005580781 podman[201242]: 2026-01-10 17:11:42.314236097 +0000 UTC m=+0.182974200 container start b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:11:42 np0005580781 podman[201242]: 2026-01-10 17:11:42.318955542 +0000 UTC m=+0.187693625 container attach b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:11:42 np0005580781 python3.9[201334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:43 np0005580781 python3.9[201515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065101.9927835-770-146702629131011/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:43 np0005580781 lvm[201534]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:11:43 np0005580781 lvm[201534]: VG ceph_vg2 finished
Jan 10 12:11:43 np0005580781 lvm[201533]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:11:43 np0005580781 lvm[201533]: VG ceph_vg1 finished
Jan 10 12:11:43 np0005580781 lvm[201532]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:11:43 np0005580781 lvm[201532]: VG ceph_vg0 finished
Jan 10 12:11:43 np0005580781 unruffled_pike[201298]: {}
Jan 10 12:11:43 np0005580781 systemd[1]: libpod-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope: Deactivated successfully.
Jan 10 12:11:43 np0005580781 systemd[1]: libpod-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope: Consumed 1.419s CPU time.
Jan 10 12:11:43 np0005580781 podman[201562]: 2026-01-10 17:11:43.25733491 +0000 UTC m=+0.022980050 container died b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:11:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay-83d9b997710b2bffed4ddf87e0dd2f509d76b7fb1a41a43378a595dbc376f5fd-merged.mount: Deactivated successfully.
Jan 10 12:11:43 np0005580781 podman[201562]: 2026-01-10 17:11:43.297736149 +0000 UTC m=+0.063381289 container remove b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:11:43 np0005580781 systemd[1]: libpod-conmon-b172ed5c6ef5f7dc018a5c01a53e3671c3e18a3320222223df29061c28c8aa35.scope: Deactivated successfully.
Jan 10 12:11:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:11:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:11:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:11:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:11:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:11:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:11:43 np0005580781 python3.9[201727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:11:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:11:44 np0005580781 python3.9[201850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065103.296403-770-148352254771167/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:45 np0005580781 python3.9[202002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:46 np0005580781 python3.9[202125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065104.7805882-770-207836343549169/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:46 np0005580781 python3.9[202277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:11:47 np0005580781 podman[202278]: 2026-01-10 17:11:47.181105345 +0000 UTC m=+0.168834164 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 12:11:47 np0005580781 python3.9[202427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065106.3700457-770-278820222182827/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:48 np0005580781 python3.9[202577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:11:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:11:48.913 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:11:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:11:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:11:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:11:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:11:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:49 np0005580781 python3.9[202732]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 10 12:11:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:51 np0005580781 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 10 12:11:51 np0005580781 podman[202889]: 2026-01-10 17:11:51.635077049 +0000 UTC m=+0.092751972 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 10 12:11:51 np0005580781 python3.9[202890]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:52 np0005580781 python3.9[203062]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:53 np0005580781 python3.9[203214]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:54 np0005580781 python3.9[203366]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:54 np0005580781 python3.9[203518]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:55 np0005580781 python3.9[203670]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:56 np0005580781 python3.9[203822]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:57 np0005580781 python3.9[203974]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:57 np0005580781 python3.9[204126]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:11:58 np0005580781 python3.9[204280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:11:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:11:59 np0005580781 python3.9[204432]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:11:59 np0005580781 systemd[1]: Reloading.
Jan 10 12:11:59 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:11:59 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:11:59 np0005580781 systemd[1]: Starting libvirt logging daemon socket...
Jan 10 12:11:59 np0005580781 systemd[1]: Listening on libvirt logging daemon socket.
Jan 10 12:11:59 np0005580781 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 10 12:11:59 np0005580781 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 10 12:11:59 np0005580781 systemd[1]: Starting libvirt logging daemon...
Jan 10 12:11:59 np0005580781 systemd[1]: Started libvirt logging daemon.
Jan 10 12:12:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:00 np0005580781 python3.9[204624]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:12:00 np0005580781 systemd[1]: Reloading.
Jan 10 12:12:01 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:12:01 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:12:01 np0005580781 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 10 12:12:01 np0005580781 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 10 12:12:01 np0005580781 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 10 12:12:01 np0005580781 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 10 12:12:01 np0005580781 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 10 12:12:01 np0005580781 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 10 12:12:01 np0005580781 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 10 12:12:01 np0005580781 systemd[1]: Starting libvirt nodedev daemon...
Jan 10 12:12:01 np0005580781 systemd[1]: Started libvirt nodedev daemon.
Jan 10 12:12:01 np0005580781 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 10 12:12:01 np0005580781 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 10 12:12:01 np0005580781 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 10 12:12:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:02 np0005580781 python3.9[204848]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:12:02 np0005580781 systemd[1]: Reloading.
Jan 10 12:12:02 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:12:02 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:12:02 np0005580781 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 10 12:12:02 np0005580781 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 10 12:12:02 np0005580781 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 10 12:12:02 np0005580781 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 10 12:12:02 np0005580781 systemd[1]: Starting libvirt proxy daemon...
Jan 10 12:12:02 np0005580781 systemd[1]: Started libvirt proxy daemon.
Jan 10 12:12:02 np0005580781 setroubleshoot[204661]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 57fdc539-06a9-4d4f-9887-6b4e9d44cade
Jan 10 12:12:02 np0005580781 setroubleshoot[204661]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 10 12:12:02 np0005580781 setroubleshoot[204661]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 57fdc539-06a9-4d4f-9887-6b4e9d44cade
Jan 10 12:12:02 np0005580781 setroubleshoot[204661]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 10 12:12:03 np0005580781 python3.9[205063]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:12:03 np0005580781 systemd[1]: Reloading.
Jan 10 12:12:03 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:12:03 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:12:03 np0005580781 systemd[1]: Listening on libvirt locking daemon socket.
Jan 10 12:12:03 np0005580781 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 10 12:12:03 np0005580781 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 10 12:12:03 np0005580781 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 10 12:12:03 np0005580781 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 10 12:12:03 np0005580781 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 10 12:12:03 np0005580781 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 10 12:12:03 np0005580781 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 10 12:12:03 np0005580781 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 10 12:12:03 np0005580781 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 10 12:12:03 np0005580781 systemd[1]: Starting libvirt QEMU daemon...
Jan 10 12:12:03 np0005580781 systemd[1]: Started libvirt QEMU daemon.
Jan 10 12:12:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:04 np0005580781 python3.9[205277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:12:04 np0005580781 systemd[1]: Reloading.
Jan 10 12:12:04 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:12:04 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:12:05 np0005580781 systemd[1]: Starting libvirt secret daemon socket...
Jan 10 12:12:05 np0005580781 systemd[1]: Listening on libvirt secret daemon socket.
Jan 10 12:12:05 np0005580781 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 10 12:12:05 np0005580781 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 10 12:12:05 np0005580781 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 10 12:12:05 np0005580781 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 10 12:12:05 np0005580781 systemd[1]: Starting libvirt secret daemon...
Jan 10 12:12:05 np0005580781 systemd[1]: Started libvirt secret daemon.
Jan 10 12:12:06 np0005580781 python3.9[205489]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:06 np0005580781 python3.9[205641]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 12:12:07 np0005580781 python3.9[205793]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:12:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:08 np0005580781 python3.9[205947]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 12:12:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:12:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:12:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:12:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:12:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:12:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:12:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:09 np0005580781 python3.9[206097]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:10 np0005580781 python3.9[206218]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065128.889809-1128-19812264224297/.source.xml follow=False _original_basename=secret.xml.j2 checksum=502388dc21d4b7fd5859feb0fdbea4c523b66fd1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:10 np0005580781 python3.9[206370]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:12:11 np0005580781 python3.9[206532]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:12 np0005580781 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 10 12:12:12 np0005580781 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.018s CPU time.
Jan 10 12:12:12 np0005580781 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 10 12:12:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:14 np0005580781 python3.9[206995]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:15 np0005580781 python3.9[207147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:15 np0005580781 python3.9[207270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065134.4450076-1183-255740683179064/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:16 np0005580781 python3.9[207422]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:17 np0005580781 podman[207546]: 2026-01-10 17:12:17.366135315 +0000 UTC m=+0.116894084 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 10 12:12:17 np0005580781 python3.9[207593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:18 np0005580781 python3.9[207678]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:18 np0005580781 python3.9[207830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:19 np0005580781 python3.9[207908]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2ezhikv2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:20 np0005580781 python3.9[208060]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:20 np0005580781 python3.9[208138]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:21 np0005580781 python3.9[208290]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:12:22 np0005580781 podman[208368]: 2026-01-10 17:12:22.082323981 +0000 UTC m=+0.076744532 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 10 12:12:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:22 np0005580781 python3[208462]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 10 12:12:23 np0005580781 python3.9[208614]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:24 np0005580781 python3.9[208692]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:25 np0005580781 python3.9[208844]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:25 np0005580781 python3.9[208922]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:26 np0005580781 python3.9[209074]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:26 np0005580781 python3.9[209152]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:27 np0005580781 python3.9[209304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:28 np0005580781 python3.9[209382]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:29 np0005580781 python3.9[209534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:29 np0005580781 python3.9[209659]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768065148.4302511-1308-14969805023374/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:30 np0005580781 python3.9[209811]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:32 np0005580781 python3.9[209963]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:12:33 np0005580781 python3.9[210118]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:34 np0005580781 python3.9[210270]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:12:35 np0005580781 python3.9[210423]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:12:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:36 np0005580781 python3.9[210577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:12:37 np0005580781 python3.9[210732]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:12:38
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes']
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:12:38 np0005580781 python3.9[210884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:38 np0005580781 python3.9[211007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065157.6038177-1380-122392488073948/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:12:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:12:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:12:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:39 np0005580781 python3.9[211159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:40 np0005580781 python3.9[211282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065159.1715386-1395-232336190912929/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:41 np0005580781 python3.9[211434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:12:41 np0005580781 python3.9[211557]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065160.7380064-1410-159791184435883/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:12:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:42 np0005580781 python3.9[211709]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:12:42 np0005580781 systemd[1]: Reloading.
Jan 10 12:12:43 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:12:43 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:12:43 np0005580781 systemd[1]: Reached target edpm_libvirt.target.
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:44 np0005580781 python3.9[211963]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 10 12:12:44 np0005580781 systemd[1]: Reloading.
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:12:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:12:44 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:12:44 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:12:44 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:12:44 np0005580781 systemd[1]: Reloading.
Jan 10 12:12:44 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:12:44 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:12:45 np0005580781 podman[212138]: 2026-01-10 17:12:45.395930205 +0000 UTC m=+0.072509747 container create b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:12:45 np0005580781 systemd[1]: Started libpod-conmon-b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0.scope.
Jan 10 12:12:45 np0005580781 podman[212138]: 2026-01-10 17:12:45.365656392 +0000 UTC m=+0.042235994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:12:45 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:12:45 np0005580781 podman[212138]: 2026-01-10 17:12:45.500778156 +0000 UTC m=+0.177357698 container init b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:12:45 np0005580781 podman[212138]: 2026-01-10 17:12:45.508274115 +0000 UTC m=+0.184853627 container start b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 12:12:45 np0005580781 podman[212138]: 2026-01-10 17:12:45.51152281 +0000 UTC m=+0.188102332 container attach b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:12:45 np0005580781 admiring_gauss[212154]: 167 167
Jan 10 12:12:45 np0005580781 systemd[1]: libpod-b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0.scope: Deactivated successfully.
Jan 10 12:12:45 np0005580781 podman[212138]: 2026-01-10 17:12:45.517820943 +0000 UTC m=+0.194400465 container died b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 10 12:12:45 np0005580781 systemd[1]: session-49.scope: Deactivated successfully.
Jan 10 12:12:45 np0005580781 systemd[1]: session-49.scope: Consumed 4min 1.728s CPU time.
Jan 10 12:12:45 np0005580781 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Jan 10 12:12:45 np0005580781 systemd-logind[798]: Removed session 49.
Jan 10 12:12:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6d4e90fb64ba93a3edc0a4dd8ecaac67c5687f7fbaea358802a8958e63059908-merged.mount: Deactivated successfully.
Jan 10 12:12:45 np0005580781 podman[212138]: 2026-01-10 17:12:45.56463108 +0000 UTC m=+0.241210582 container remove b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:12:45 np0005580781 systemd[1]: libpod-conmon-b8dc01457326a5fff68f8ebddd0b7f98d1aa1daaa1f53c4100540e9a66cc37e0.scope: Deactivated successfully.
Jan 10 12:12:45 np0005580781 podman[212177]: 2026-01-10 17:12:45.758345535 +0000 UTC m=+0.046779577 container create 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 12:12:45 np0005580781 systemd[1]: Started libpod-conmon-3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79.scope.
Jan 10 12:12:45 np0005580781 podman[212177]: 2026-01-10 17:12:45.740251806 +0000 UTC m=+0.028685828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:12:45 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:12:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:45 np0005580781 podman[212177]: 2026-01-10 17:12:45.879823791 +0000 UTC m=+0.168257833 container init 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 12:12:45 np0005580781 podman[212177]: 2026-01-10 17:12:45.895189469 +0000 UTC m=+0.183623511 container start 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 12:12:45 np0005580781 podman[212177]: 2026-01-10 17:12:45.926499173 +0000 UTC m=+0.214933215 container attach 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:12:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:46 np0005580781 musing_galileo[212193]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:12:46 np0005580781 musing_galileo[212193]: --> All data devices are unavailable
Jan 10 12:12:46 np0005580781 systemd[1]: libpod-3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79.scope: Deactivated successfully.
Jan 10 12:12:46 np0005580781 podman[212177]: 2026-01-10 17:12:46.524680635 +0000 UTC m=+0.813114657 container died 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 12:12:46 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9ba235553abba65d6b01dc9960ee3801569ce9cae3fea5025b362272669058d2-merged.mount: Deactivated successfully.
Jan 10 12:12:46 np0005580781 podman[212177]: 2026-01-10 17:12:46.583554904 +0000 UTC m=+0.871988926 container remove 3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_galileo, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:12:46 np0005580781 systemd[1]: libpod-conmon-3077141643eb467a83381ce38e6809e2a96c120e7094d590dd9ed40fea490f79.scope: Deactivated successfully.
Jan 10 12:12:47 np0005580781 podman[212286]: 2026-01-10 17:12:47.102003438 +0000 UTC m=+0.061573589 container create cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 12:12:47 np0005580781 systemd[1]: Started libpod-conmon-cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb.scope.
Jan 10 12:12:47 np0005580781 podman[212286]: 2026-01-10 17:12:47.075307609 +0000 UTC m=+0.034877860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:12:47 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:12:47 np0005580781 podman[212286]: 2026-01-10 17:12:47.199981348 +0000 UTC m=+0.159551569 container init cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 12:12:47 np0005580781 podman[212286]: 2026-01-10 17:12:47.212123742 +0000 UTC m=+0.171693923 container start cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 12:12:47 np0005580781 podman[212286]: 2026-01-10 17:12:47.216428748 +0000 UTC m=+0.175998939 container attach cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:12:47 np0005580781 musing_bhaskara[212302]: 167 167
Jan 10 12:12:47 np0005580781 systemd[1]: libpod-cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb.scope: Deactivated successfully.
Jan 10 12:12:47 np0005580781 podman[212286]: 2026-01-10 17:12:47.22163667 +0000 UTC m=+0.181206861 container died cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 12:12:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay-303261a2b1ed1a789606cf9803987ec944758b267dbbfc9e5d15096fc3a4f641-merged.mount: Deactivated successfully.
Jan 10 12:12:47 np0005580781 podman[212286]: 2026-01-10 17:12:47.274465092 +0000 UTC m=+0.234035283 container remove cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:12:47 np0005580781 systemd[1]: libpod-conmon-cb4ad84e37f9da5ce56bfa4e40c83ca11f865608a5c64cc8514fa7350b5354cb.scope: Deactivated successfully.
Jan 10 12:12:47 np0005580781 podman[212326]: 2026-01-10 17:12:47.511518572 +0000 UTC m=+0.066734499 container create e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 12:12:47 np0005580781 systemd[1]: Started libpod-conmon-e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9.scope.
Jan 10 12:12:47 np0005580781 podman[212326]: 2026-01-10 17:12:47.482483915 +0000 UTC m=+0.037699892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:12:47 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:12:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:47 np0005580781 podman[212326]: 2026-01-10 17:12:47.605068953 +0000 UTC m=+0.160284880 container init e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:12:47 np0005580781 podman[212326]: 2026-01-10 17:12:47.616806956 +0000 UTC m=+0.172022873 container start e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 12:12:47 np0005580781 podman[212326]: 2026-01-10 17:12:47.621379999 +0000 UTC m=+0.176595926 container attach e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 12:12:47 np0005580781 podman[212340]: 2026-01-10 17:12:47.726143557 +0000 UTC m=+0.159565099 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]: {
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:    "0": [
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:        {
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "devices": [
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "/dev/loop3"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            ],
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_name": "ceph_lv0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_size": "21470642176",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "name": "ceph_lv0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "tags": {
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cluster_name": "ceph",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.crush_device_class": "",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.encrypted": "0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.objectstore": "bluestore",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osd_id": "0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.type": "block",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.vdo": "0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.with_tpm": "0"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            },
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "type": "block",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "vg_name": "ceph_vg0"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:        }
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:    ],
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:    "1": [
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:        {
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "devices": [
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "/dev/loop4"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            ],
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_name": "ceph_lv1",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_size": "21470642176",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "name": "ceph_lv1",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "tags": {
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cluster_name": "ceph",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.crush_device_class": "",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.encrypted": "0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.objectstore": "bluestore",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osd_id": "1",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.type": "block",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.vdo": "0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.with_tpm": "0"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            },
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "type": "block",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "vg_name": "ceph_vg1"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:        }
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:    ],
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:    "2": [
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:        {
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "devices": [
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "/dev/loop5"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            ],
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_name": "ceph_lv2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_size": "21470642176",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "name": "ceph_lv2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "tags": {
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.cluster_name": "ceph",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.crush_device_class": "",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.encrypted": "0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.objectstore": "bluestore",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osd_id": "2",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.type": "block",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.vdo": "0",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:                "ceph.with_tpm": "0"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            },
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "type": "block",
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:            "vg_name": "ceph_vg2"
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:        }
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]:    ]
Jan 10 12:12:47 np0005580781 dreamy_northcutt[212343]: }
Jan 10 12:12:47 np0005580781 systemd[1]: libpod-e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9.scope: Deactivated successfully.
Jan 10 12:12:47 np0005580781 podman[212326]: 2026-01-10 17:12:47.993369878 +0000 UTC m=+0.548585775 container died e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:12:48 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1478a8573712360fc3106c04254850da04d7f31f3216dee0f2c492be58eac0b0-merged.mount: Deactivated successfully.
Jan 10 12:12:48 np0005580781 podman[212326]: 2026-01-10 17:12:48.047689264 +0000 UTC m=+0.602905161 container remove e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:12:48 np0005580781 systemd[1]: libpod-conmon-e3851d1ae7d9fa80095fe142fdcb27fd4cfa5c296f93fd9d58bc1b9b783757c9.scope: Deactivated successfully.
Jan 10 12:12:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:48 np0005580781 podman[212451]: 2026-01-10 17:12:48.692333351 +0000 UTC m=+0.065292347 container create 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 12:12:48 np0005580781 systemd[1]: Started libpod-conmon-36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32.scope.
Jan 10 12:12:48 np0005580781 podman[212451]: 2026-01-10 17:12:48.671561824 +0000 UTC m=+0.044520820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:12:48 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:12:48 np0005580781 podman[212451]: 2026-01-10 17:12:48.788765006 +0000 UTC m=+0.161723982 container init 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 12:12:48 np0005580781 podman[212451]: 2026-01-10 17:12:48.798769408 +0000 UTC m=+0.171728364 container start 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:12:48 np0005580781 podman[212451]: 2026-01-10 17:12:48.802778365 +0000 UTC m=+0.175737331 container attach 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:12:48 np0005580781 elastic_golick[212467]: 167 167
Jan 10 12:12:48 np0005580781 systemd[1]: libpod-36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32.scope: Deactivated successfully.
Jan 10 12:12:48 np0005580781 podman[212451]: 2026-01-10 17:12:48.806163853 +0000 UTC m=+0.179122819 container died 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 10 12:12:48 np0005580781 systemd[1]: var-lib-containers-storage-overlay-e2d182955f174c666d2b585a58b21850ca49c98bdd51786e84bacf2ee8416379-merged.mount: Deactivated successfully.
Jan 10 12:12:48 np0005580781 podman[212451]: 2026-01-10 17:12:48.853192736 +0000 UTC m=+0.226151702 container remove 36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:12:48 np0005580781 systemd[1]: libpod-conmon-36b85e87dfe0c7b49a7aa686e246495e5d78df3e4c84f3ab5af66e4a1fa7bc32.scope: Deactivated successfully.
Jan 10 12:12:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:12:48.913 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:12:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:12:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:12:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:12:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:12:49 np0005580781 podman[212489]: 2026-01-10 17:12:49.095744717 +0000 UTC m=+0.071637292 container create 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 10 12:12:49 np0005580781 systemd[1]: Started libpod-conmon-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope.
Jan 10 12:12:49 np0005580781 podman[212489]: 2026-01-10 17:12:49.063130265 +0000 UTC m=+0.039022900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:12:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:49 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:12:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:49 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:12:49 np0005580781 podman[212489]: 2026-01-10 17:12:49.196110927 +0000 UTC m=+0.172003542 container init 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:12:49 np0005580781 podman[212489]: 2026-01-10 17:12:49.210171827 +0000 UTC m=+0.186064412 container start 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:12:49 np0005580781 podman[212489]: 2026-01-10 17:12:49.214649448 +0000 UTC m=+0.190542033 container attach 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:12:49 np0005580781 lvm[212584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:12:49 np0005580781 lvm[212584]: VG ceph_vg0 finished
Jan 10 12:12:49 np0005580781 lvm[212585]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:12:49 np0005580781 lvm[212585]: VG ceph_vg1 finished
Jan 10 12:12:50 np0005580781 lvm[212587]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:12:50 np0005580781 lvm[212587]: VG ceph_vg2 finished
Jan 10 12:12:50 np0005580781 reverent_mestorf[212506]: {}
Jan 10 12:12:50 np0005580781 systemd[1]: libpod-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope: Deactivated successfully.
Jan 10 12:12:50 np0005580781 podman[212489]: 2026-01-10 17:12:50.112563609 +0000 UTC m=+1.088456174 container died 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 12:12:50 np0005580781 systemd[1]: libpod-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope: Consumed 1.464s CPU time.
Jan 10 12:12:50 np0005580781 systemd[1]: var-lib-containers-storage-overlay-309692ec14f9a03c3f66831fc824aef267ee7f0069c07a665921ed6f7d04632e-merged.mount: Deactivated successfully.
Jan 10 12:12:50 np0005580781 podman[212489]: 2026-01-10 17:12:50.175741383 +0000 UTC m=+1.151633958 container remove 02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 12:12:50 np0005580781 systemd[1]: libpod-conmon-02e1f58eb109da1cce1e99785b60ca5d3cc9312e0723bad0cf5e44c1b7024dfe.scope: Deactivated successfully.
Jan 10 12:12:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:12:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:12:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:12:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:12:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:51 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:12:51 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:12:51 np0005580781 systemd-logind[798]: New session 50 of user zuul.
Jan 10 12:12:51 np0005580781 systemd[1]: Started Session 50 of User zuul.
Jan 10 12:12:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:52 np0005580781 python3.9[212780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:12:53 np0005580781 podman[212809]: 2026-01-10 17:12:53.075031446 +0000 UTC m=+0.072694753 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 10 12:12:53 np0005580781 python3.9[212953]: ansible-ansible.builtin.service_facts Invoked
Jan 10 12:12:53 np0005580781 network[212970]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 12:12:53 np0005580781 network[212971]: 'network-scripts' will be removed from distribution in near future.
Jan 10 12:12:53 np0005580781 network[212972]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 12:12:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:12:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:12:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:00 np0005580781 python3.9[213244]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 10 12:13:02 np0005580781 python3.9[213328]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:13:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.485246) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187485468, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 2363109, "memory_usage": 2405216, "flush_reason": "Manual Compaction"}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187503618, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2291267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9118, "largest_seqno": 11159, "table_properties": {"data_size": 2282087, "index_size": 5802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17876, "raw_average_key_size": 19, "raw_value_size": 2263717, "raw_average_value_size": 2465, "num_data_blocks": 267, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064954, "oldest_key_time": 1768064954, "file_creation_time": 1768065187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 18461 microseconds, and 6705 cpu microseconds.
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.503756) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2291267 bytes OK
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.503803) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.505981) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.506002) EVENT_LOG_v1 {"time_micros": 1768065187505999, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.506028) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2354588, prev total WAL file size 2354588, number of live WAL files 2.
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.507270) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2237KB)], [26(4823KB)]
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187507550, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 7230724, "oldest_snapshot_seqno": -1}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3244 keys, 6088494 bytes, temperature: kUnknown
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187575772, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 6088494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6062275, "index_size": 17021, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 74895, "raw_average_key_size": 23, "raw_value_size": 5999676, "raw_average_value_size": 1849, "num_data_blocks": 751, "num_entries": 3244, "num_filter_entries": 3244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.576281) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 6088494 bytes
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.578193) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.8 rd, 89.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.7 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 3758, records dropped: 514 output_compression: NoCompression
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.578248) EVENT_LOG_v1 {"time_micros": 1768065187578232, "job": 10, "event": "compaction_finished", "compaction_time_micros": 68368, "compaction_time_cpu_micros": 40696, "output_level": 6, "num_output_files": 1, "total_output_size": 6088494, "num_input_records": 3758, "num_output_records": 3244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187579265, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065187581168, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.506877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:13:07 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:13:07.581298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:13:08 np0005580781 python3.9[213481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:13:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:13:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:13:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:13:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:13:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:13:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:13:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:09 np0005580781 python3.9[213633]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:13:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:10 np0005580781 python3.9[213786]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:13:11 np0005580781 python3.9[213938]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:13:11 np0005580781 python3.9[214091]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:13:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:12 np0005580781 python3.9[214214]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065191.4991152-90-157725941526714/.source.iscsi _original_basename=.67edewgc follow=False checksum=20aa512fad3df14aa1fa2c6777f3d96658f0cf72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:13 np0005580781 python3.9[214366]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:14 np0005580781 python3.9[214518]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:16 np0005580781 python3.9[214670]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:13:16 np0005580781 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 10 12:13:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:17 np0005580781 python3.9[214826]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:13:18 np0005580781 podman[214829]: 2026-01-10 17:13:18.163293397 +0000 UTC m=+0.150320709 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 10 12:13:18 np0005580781 systemd[1]: Reloading.
Jan 10 12:13:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:18 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:13:18 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:13:18 np0005580781 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 10 12:13:18 np0005580781 systemd[1]: Starting Open-iSCSI...
Jan 10 12:13:18 np0005580781 kernel: Loading iSCSI transport class v2.0-870.
Jan 10 12:13:18 np0005580781 systemd[1]: Started Open-iSCSI.
Jan 10 12:13:18 np0005580781 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 10 12:13:18 np0005580781 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 10 12:13:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:19 np0005580781 python3.9[215055]: ansible-ansible.builtin.service_facts Invoked
Jan 10 12:13:19 np0005580781 network[215072]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 12:13:19 np0005580781 network[215073]: 'network-scripts' will be removed from distribution in near future.
Jan 10 12:13:19 np0005580781 network[215074]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 12:13:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:23 np0005580781 podman[215165]: 2026-01-10 17:13:23.282032139 +0000 UTC m=+0.115956366 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 12:13:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:25 np0005580781 python3.9[215366]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:13:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:27 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 12:13:27 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 12:13:27 np0005580781 systemd[1]: Reloading.
Jan 10 12:13:27 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:13:27 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:13:28 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 12:13:28 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 12:13:28 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 12:13:28 np0005580781 systemd[1]: run-r17f0a6a0424845bbadf0dbee14a7a581.service: Deactivated successfully.
Jan 10 12:13:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:29 np0005580781 python3.9[215684]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 10 12:13:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:30 np0005580781 python3.9[215836]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 10 12:13:31 np0005580781 python3.9[215992]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:13:31 np0005580781 python3.9[216115]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065210.752674-178-189788000314792/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:32 np0005580781 python3.9[216267]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:34 np0005580781 python3.9[216419]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:13:34 np0005580781 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 10 12:13:34 np0005580781 systemd[1]: Stopped Load Kernel Modules.
Jan 10 12:13:34 np0005580781 systemd[1]: Stopping Load Kernel Modules...
Jan 10 12:13:34 np0005580781 systemd[1]: Starting Load Kernel Modules...
Jan 10 12:13:34 np0005580781 systemd[1]: Finished Load Kernel Modules.
Jan 10 12:13:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:34 np0005580781 python3.9[216575]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:13:35 np0005580781 python3.9[216728]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:13:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:36 np0005580781 python3.9[216880]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:13:37 np0005580781 python3.9[217003]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065216.1849818-229-181869298825951/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:13:38
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'cephfs.cephfs.data', 'volumes']
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:13:38 np0005580781 python3.9[217155]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:13:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:13:39 np0005580781 python3.9[217308]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:13:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:13:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:40 np0005580781 python3.9[217460]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:40 np0005580781 python3.9[217612]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:41 np0005580781 python3.9[217764]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:42 np0005580781 python3.9[217916]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:43 np0005580781 python3.9[218068]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:44 np0005580781 python3.9[218220]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:13:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:13:44 np0005580781 python3.9[218372]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:13:45 np0005580781 python3.9[218526]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:13:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:46 np0005580781 python3.9[218679]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:13:46 np0005580781 systemd[1]: Listening on multipathd control socket.
Jan 10 12:13:47 np0005580781 python3.9[218835]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:13:47 np0005580781 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 10 12:13:47 np0005580781 udevadm[218840]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 10 12:13:47 np0005580781 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 10 12:13:47 np0005580781 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 10 12:13:47 np0005580781 multipathd[218844]: --------start up--------
Jan 10 12:13:47 np0005580781 multipathd[218844]: read /etc/multipath.conf
Jan 10 12:13:47 np0005580781 multipathd[218844]: path checkers start up
Jan 10 12:13:47 np0005580781 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 10 12:13:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:48 np0005580781 podman[218975]: 2026-01-10 17:13:48.809072546 +0000 UTC m=+0.159638500 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 10 12:13:48 np0005580781 python3.9[219015]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 10 12:13:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:13:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:13:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:13:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:13:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:13:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:13:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:49 np0005580781 python3.9[219180]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 10 12:13:49 np0005580781 kernel: Key type psk registered
Jan 10 12:13:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:50 np0005580781 python3.9[219393]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:13:51 np0005580781 python3.9[219570]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768065230.183878-359-273205891564077/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:13:51 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:13:51 np0005580781 podman[219617]: 2026-01-10 17:13:51.462405792 +0000 UTC m=+0.034643714 container create 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:13:51 np0005580781 systemd[1]: Started libpod-conmon-56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29.scope.
Jan 10 12:13:51 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:13:51 np0005580781 podman[219617]: 2026-01-10 17:13:51.448528787 +0000 UTC m=+0.020766739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:13:51 np0005580781 podman[219617]: 2026-01-10 17:13:51.558604185 +0000 UTC m=+0.130842207 container init 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:13:51 np0005580781 podman[219617]: 2026-01-10 17:13:51.573857912 +0000 UTC m=+0.146095884 container start 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 12:13:51 np0005580781 naughty_driscoll[219651]: 167 167
Jan 10 12:13:51 np0005580781 podman[219617]: 2026-01-10 17:13:51.579097175 +0000 UTC m=+0.151335187 container attach 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 12:13:51 np0005580781 systemd[1]: libpod-56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29.scope: Deactivated successfully.
Jan 10 12:13:51 np0005580781 podman[219617]: 2026-01-10 17:13:51.58063637 +0000 UTC m=+0.152874322 container died 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 12:13:51 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9bba9fe20a4dcb21d7ca30235f18fbd4588ab3fbb26cbaa9309d5757a781a69d-merged.mount: Deactivated successfully.
Jan 10 12:13:51 np0005580781 podman[219617]: 2026-01-10 17:13:51.631456566 +0000 UTC m=+0.203694498 container remove 56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:13:51 np0005580781 systemd[1]: libpod-conmon-56266b49be06f17505d0e6177ac00ce49f7c72cc06033bacbe0f752fce565a29.scope: Deactivated successfully.
Jan 10 12:13:51 np0005580781 podman[219751]: 2026-01-10 17:13:51.851767568 +0000 UTC m=+0.051515897 container create 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:13:51 np0005580781 systemd[1]: Started libpod-conmon-63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa.scope.
Jan 10 12:13:51 np0005580781 podman[219751]: 2026-01-10 17:13:51.826406066 +0000 UTC m=+0.026154405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:13:51 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:13:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:51 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:51 np0005580781 podman[219751]: 2026-01-10 17:13:51.949433404 +0000 UTC m=+0.149181763 container init 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 10 12:13:51 np0005580781 podman[219751]: 2026-01-10 17:13:51.961323752 +0000 UTC m=+0.161072091 container start 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:13:51 np0005580781 podman[219751]: 2026-01-10 17:13:51.964832054 +0000 UTC m=+0.164580433 container attach 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:13:52 np0005580781 python3.9[219822]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:13:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:52 np0005580781 gracious_sanderson[219814]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:13:52 np0005580781 gracious_sanderson[219814]: --> All data devices are unavailable
Jan 10 12:13:52 np0005580781 systemd[1]: libpod-63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa.scope: Deactivated successfully.
Jan 10 12:13:52 np0005580781 podman[219751]: 2026-01-10 17:13:52.595570057 +0000 UTC m=+0.795318416 container died 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 12:13:52 np0005580781 systemd[1]: var-lib-containers-storage-overlay-000ec63c38fd57934754736dcd379bf237485491a86cf30686e52708ecd86d33-merged.mount: Deactivated successfully.
Jan 10 12:13:52 np0005580781 podman[219751]: 2026-01-10 17:13:52.653973295 +0000 UTC m=+0.853721624 container remove 63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_sanderson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 12:13:52 np0005580781 systemd[1]: libpod-conmon-63501d05b0489bd91f96a964a3e0188f679ecc8310729e375131fd7599b2bcfa.scope: Deactivated successfully.
Jan 10 12:13:52 np0005580781 python3.9[220003]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:13:53 np0005580781 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 10 12:13:53 np0005580781 systemd[1]: Stopped Load Kernel Modules.
Jan 10 12:13:53 np0005580781 systemd[1]: Stopping Load Kernel Modules...
Jan 10 12:13:53 np0005580781 systemd[1]: Starting Load Kernel Modules...
Jan 10 12:13:53 np0005580781 systemd[1]: Finished Load Kernel Modules.
Jan 10 12:13:53 np0005580781 podman[220096]: 2026-01-10 17:13:53.311340077 +0000 UTC m=+0.059884132 container create 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:13:53 np0005580781 systemd[1]: Started libpod-conmon-289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a.scope.
Jan 10 12:13:53 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:13:53 np0005580781 podman[220096]: 2026-01-10 17:13:53.287895082 +0000 UTC m=+0.036439177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:13:53 np0005580781 podman[220096]: 2026-01-10 17:13:53.397729504 +0000 UTC m=+0.146273599 container init 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 12:13:53 np0005580781 podman[220096]: 2026-01-10 17:13:53.40614545 +0000 UTC m=+0.154689505 container start 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 10 12:13:53 np0005580781 podman[220096]: 2026-01-10 17:13:53.411066984 +0000 UTC m=+0.159611069 container attach 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:13:53 np0005580781 pensive_hofstadter[220134]: 167 167
Jan 10 12:13:53 np0005580781 podman[220096]: 2026-01-10 17:13:53.413209526 +0000 UTC m=+0.161753581 container died 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:13:53 np0005580781 systemd[1]: libpod-289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a.scope: Deactivated successfully.
Jan 10 12:13:53 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1cdb004859ed21ea40f84001c6076fa79aed439ff16fe6d0685d6b4d32457db1-merged.mount: Deactivated successfully.
Jan 10 12:13:53 np0005580781 podman[220096]: 2026-01-10 17:13:53.466088232 +0000 UTC m=+0.214632287 container remove 289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 10 12:13:53 np0005580781 systemd[1]: libpod-conmon-289e4fa6b8df4712d2a0c8a2d08aa43c780591bd2800b807b0a7f6801e414a7a.scope: Deactivated successfully.
Jan 10 12:13:53 np0005580781 podman[220118]: 2026-01-10 17:13:53.478969269 +0000 UTC m=+0.116864088 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 12:13:53 np0005580781 podman[220255]: 2026-01-10 17:13:53.714424604 +0000 UTC m=+0.076455076 container create 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 12:13:53 np0005580781 podman[220255]: 2026-01-10 17:13:53.685786207 +0000 UTC m=+0.047816749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:13:53 np0005580781 systemd[1]: Started libpod-conmon-1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699.scope.
Jan 10 12:13:53 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:13:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:53 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:53 np0005580781 podman[220255]: 2026-01-10 17:13:53.868484329 +0000 UTC m=+0.230514881 container init 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 12:13:53 np0005580781 podman[220255]: 2026-01-10 17:13:53.883126517 +0000 UTC m=+0.245157009 container start 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:13:53 np0005580781 podman[220255]: 2026-01-10 17:13:53.887463494 +0000 UTC m=+0.249493996 container attach 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 12:13:54 np0005580781 python3.9[220297]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]: {
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:    "0": [
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:        {
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "devices": [
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "/dev/loop3"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            ],
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_name": "ceph_lv0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_size": "21470642176",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "name": "ceph_lv0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "tags": {
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cluster_name": "ceph",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.crush_device_class": "",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.encrypted": "0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.objectstore": "bluestore",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osd_id": "0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.type": "block",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.vdo": "0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.with_tpm": "0"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            },
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "type": "block",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "vg_name": "ceph_vg0"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:        }
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:    ],
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:    "1": [
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:        {
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "devices": [
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "/dev/loop4"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            ],
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_name": "ceph_lv1",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_size": "21470642176",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "name": "ceph_lv1",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "tags": {
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cluster_name": "ceph",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.crush_device_class": "",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.encrypted": "0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.objectstore": "bluestore",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osd_id": "1",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.type": "block",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.vdo": "0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.with_tpm": "0"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            },
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "type": "block",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "vg_name": "ceph_vg1"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:        }
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:    ],
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:    "2": [
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:        {
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "devices": [
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "/dev/loop5"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            ],
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_name": "ceph_lv2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_size": "21470642176",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "name": "ceph_lv2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "tags": {
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.cluster_name": "ceph",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.crush_device_class": "",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.encrypted": "0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.objectstore": "bluestore",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osd_id": "2",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.type": "block",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.vdo": "0",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:                "ceph.with_tpm": "0"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            },
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "type": "block",
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:            "vg_name": "ceph_vg2"
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:        }
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]:    ]
Jan 10 12:13:54 np0005580781 agitated_lalande[220300]: }
Jan 10 12:13:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:54 np0005580781 systemd[1]: libpod-1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699.scope: Deactivated successfully.
Jan 10 12:13:54 np0005580781 podman[220255]: 2026-01-10 17:13:54.223086038 +0000 UTC m=+0.585116560 container died 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 12:13:54 np0005580781 systemd[1]: var-lib-containers-storage-overlay-993d574818403955e598a357c1237f29a4157da428c1810d924ee5848703306f-merged.mount: Deactivated successfully.
Jan 10 12:13:54 np0005580781 podman[220255]: 2026-01-10 17:13:54.278740156 +0000 UTC m=+0.640770628 container remove 1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:13:54 np0005580781 systemd[1]: libpod-conmon-1a07f167c00aeb93e6661cb63022208ef0a677d84af6b0356f14a824c1d88699.scope: Deactivated successfully.
Jan 10 12:13:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:54 np0005580781 podman[220385]: 2026-01-10 17:13:54.847206819 +0000 UTC m=+0.057746910 container create a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:13:54 np0005580781 systemd[1]: Started libpod-conmon-a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7.scope.
Jan 10 12:13:54 np0005580781 podman[220385]: 2026-01-10 17:13:54.816566203 +0000 UTC m=+0.027106364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:13:54 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:13:54 np0005580781 podman[220385]: 2026-01-10 17:13:54.953861637 +0000 UTC m=+0.164401788 container init a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:13:54 np0005580781 podman[220385]: 2026-01-10 17:13:54.964774097 +0000 UTC m=+0.175314168 container start a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 12:13:54 np0005580781 podman[220385]: 2026-01-10 17:13:54.969940538 +0000 UTC m=+0.180480699 container attach a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 12:13:54 np0005580781 objective_swartz[220401]: 167 167
Jan 10 12:13:54 np0005580781 systemd[1]: libpod-a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7.scope: Deactivated successfully.
Jan 10 12:13:54 np0005580781 podman[220385]: 2026-01-10 17:13:54.971209985 +0000 UTC m=+0.181750056 container died a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:13:55 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ceefce939f86b21b838a1d169ded7351918188b8f57c7fd2ab91f54fe905530b-merged.mount: Deactivated successfully.
Jan 10 12:13:55 np0005580781 podman[220385]: 2026-01-10 17:13:55.020906378 +0000 UTC m=+0.231446459 container remove a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:13:55 np0005580781 systemd[1]: libpod-conmon-a02cd41a6d53b47e73222c65183a113cbf57c1e631a907a783927297acd899a7.scope: Deactivated successfully.
Jan 10 12:13:55 np0005580781 podman[220425]: 2026-01-10 17:13:55.221813013 +0000 UTC m=+0.063409525 container create 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:13:55 np0005580781 systemd[1]: Started libpod-conmon-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope.
Jan 10 12:13:55 np0005580781 podman[220425]: 2026-01-10 17:13:55.190643991 +0000 UTC m=+0.032240553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:13:55 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:13:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:55 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:13:55 np0005580781 podman[220425]: 2026-01-10 17:13:55.348177248 +0000 UTC m=+0.189773820 container init 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:13:55 np0005580781 podman[220425]: 2026-01-10 17:13:55.361172558 +0000 UTC m=+0.202769070 container start 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:13:55 np0005580781 podman[220425]: 2026-01-10 17:13:55.365298359 +0000 UTC m=+0.206894871 container attach 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:13:56 np0005580781 lvm[220522]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:13:56 np0005580781 lvm[220522]: VG ceph_vg0 finished
Jan 10 12:13:56 np0005580781 lvm[220524]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:13:56 np0005580781 lvm[220524]: VG ceph_vg1 finished
Jan 10 12:13:56 np0005580781 lvm[220526]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:13:56 np0005580781 lvm[220526]: VG ceph_vg2 finished
Jan 10 12:13:56 np0005580781 lvm[220527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:13:56 np0005580781 lvm[220527]: VG ceph_vg0 finished
Jan 10 12:13:56 np0005580781 eloquent_hawking[220442]: {}
Jan 10 12:13:56 np0005580781 systemd[1]: libpod-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope: Deactivated successfully.
Jan 10 12:13:56 np0005580781 systemd[1]: libpod-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope: Consumed 1.527s CPU time.
Jan 10 12:13:56 np0005580781 podman[220425]: 2026-01-10 17:13:56.277981646 +0000 UTC m=+1.119578158 container died 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 10 12:13:56 np0005580781 systemd[1]: var-lib-containers-storage-overlay-07229d3983aedce6bc8ea32bf3ba5fb3bdb73c647ee6f5d42cf99ce7e0399ae5-merged.mount: Deactivated successfully.
Jan 10 12:13:56 np0005580781 podman[220425]: 2026-01-10 17:13:56.337454785 +0000 UTC m=+1.179051297 container remove 05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 10 12:13:56 np0005580781 systemd[1]: libpod-conmon-05bad29c3c4072bc14bb6037393bab7dde20bee53bfa66688d36c5acbb04424d.scope: Deactivated successfully.
Jan 10 12:13:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:13:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:13:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:13:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:13:56 np0005580781 systemd[1]: Reloading.
Jan 10 12:13:56 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:13:56 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:13:56 np0005580781 systemd[1]: Reloading.
Jan 10 12:13:56 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:13:56 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:13:57 np0005580781 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 10 12:13:57 np0005580781 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 10 12:13:57 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:13:57 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:13:57 np0005580781 lvm[220679]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:13:57 np0005580781 lvm[220679]: VG ceph_vg0 finished
Jan 10 12:13:57 np0005580781 lvm[220677]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:13:57 np0005580781 lvm[220677]: VG ceph_vg1 finished
Jan 10 12:13:57 np0005580781 lvm[220673]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:13:57 np0005580781 lvm[220673]: VG ceph_vg2 finished
Jan 10 12:13:57 np0005580781 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 10 12:13:57 np0005580781 systemd[1]: Starting man-db-cache-update.service...
Jan 10 12:13:57 np0005580781 systemd[1]: Reloading.
Jan 10 12:13:57 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:13:57 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:13:58 np0005580781 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 10 12:13:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:13:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:13:59 np0005580781 python3.9[221894]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:13:59 np0005580781 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 10 12:13:59 np0005580781 systemd[1]: Finished man-db-cache-update.service.
Jan 10 12:13:59 np0005580781 systemd[1]: man-db-cache-update.service: Consumed 2.070s CPU time.
Jan 10 12:13:59 np0005580781 systemd[1]: run-rdde1c10fe66f48e89d9a73cd74f301f6.service: Deactivated successfully.
Jan 10 12:13:59 np0005580781 systemd[1]: Stopping Open-iSCSI...
Jan 10 12:13:59 np0005580781 iscsid[214895]: iscsid shutting down.
Jan 10 12:13:59 np0005580781 systemd[1]: iscsid.service: Deactivated successfully.
Jan 10 12:13:59 np0005580781 systemd[1]: Stopped Open-iSCSI.
Jan 10 12:13:59 np0005580781 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 10 12:13:59 np0005580781 systemd[1]: Starting Open-iSCSI...
Jan 10 12:13:59 np0005580781 systemd[1]: Started Open-iSCSI.
Jan 10 12:14:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:00 np0005580781 python3.9[222188]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:14:00 np0005580781 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 10 12:14:00 np0005580781 multipathd[218844]: exit (signal)
Jan 10 12:14:00 np0005580781 multipathd[218844]: --------shut down-------
Jan 10 12:14:00 np0005580781 systemd[1]: multipathd.service: Deactivated successfully.
Jan 10 12:14:00 np0005580781 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 10 12:14:00 np0005580781 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 10 12:14:00 np0005580781 multipathd[222194]: --------start up--------
Jan 10 12:14:00 np0005580781 multipathd[222194]: read /etc/multipath.conf
Jan 10 12:14:00 np0005580781 multipathd[222194]: path checkers start up
Jan 10 12:14:00 np0005580781 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 10 12:14:01 np0005580781 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 10 12:14:01 np0005580781 python3.9[222351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 10 12:14:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:02 np0005580781 python3.9[222508]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:02 np0005580781 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 10 12:14:03 np0005580781 python3.9[222661]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 12:14:03 np0005580781 systemd[1]: Reloading.
Jan 10 12:14:03 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:14:03 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:14:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:04 np0005580781 python3.9[222846]: ansible-ansible.builtin.service_facts Invoked
Jan 10 12:14:05 np0005580781 network[222863]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 10 12:14:05 np0005580781 network[222864]: 'network-scripts' will be removed from distribution in near future.
Jan 10 12:14:05 np0005580781 network[222865]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 10 12:14:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:14:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:14:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:14:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:14:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:14:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:14:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:09 np0005580781 python3.9[223138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:10 np0005580781 python3.9[223291]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:11 np0005580781 python3.9[223444]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:12 np0005580781 python3.9[223597]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:13 np0005580781 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 10 12:14:13 np0005580781 python3.9[223750]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:13 np0005580781 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 10 12:14:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:14 np0005580781 python3.9[223905]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:15 np0005580781 python3.9[224058]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:16 np0005580781 python3.9[224211]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:14:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:17 np0005580781 python3.9[224364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:17 np0005580781 python3.9[224516]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:18 np0005580781 python3.9[224668]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:19 np0005580781 podman[224729]: 2026-01-10 17:14:19.140338164 +0000 UTC m=+0.127448798 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 10 12:14:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:19 np0005580781 python3.9[224844]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:20 np0005580781 python3.9[224996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:20 np0005580781 python3.9[225148]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:21 np0005580781 python3.9[225300]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:22 np0005580781 python3.9[225452]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:23 np0005580781 python3.9[225604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:23 np0005580781 podman[225728]: 2026-01-10 17:14:23.845298654 +0000 UTC m=+0.081317019 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 12:14:24 np0005580781 python3.9[225777]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:24 np0005580781 python3.9[225929]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:25 np0005580781 python3.9[226081]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:26 np0005580781 python3.9[226233]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:27 np0005580781 python3.9[226385]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:27 np0005580781 python3.9[226537]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:28 np0005580781 python3.9[226689]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:14:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:29 np0005580781 python3.9[226841]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:30 np0005580781 python3.9[226993]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 10 12:14:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:31 np0005580781 python3.9[227145]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 12:14:31 np0005580781 systemd[1]: Reloading.
Jan 10 12:14:31 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:14:31 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:14:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:32 np0005580781 python3.9[227333]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:34 np0005580781 python3.9[227486]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:35 np0005580781 python3.9[227639]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:35 np0005580781 python3.9[227792]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:36 np0005580781 python3.9[227945]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:37 np0005580781 python3.9[228098]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:14:38
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', '.mgr', 'vms']
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:14:38 np0005580781 python3.9[228251]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:14:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:14:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:14:39 np0005580781 python3.9[228404]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 10 12:14:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:40 np0005580781 python3.9[228557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:41 np0005580781 python3.9[228709]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:42 np0005580781 python3.9[228861]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:42 np0005580781 python3.9[229013]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:43 np0005580781 python3.9[229165]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:44 np0005580781 python3.9[229317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:14:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:14:45 np0005580781 python3.9[229469]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:45 np0005580781 python3.9[229621]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:46 np0005580781 python3.9[229773]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:47 np0005580781 python3.9[229925]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:14:48.915 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:14:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:14:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:14:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:14:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.213822) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289214013, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 993, "num_deletes": 250, "total_data_size": 998251, "memory_usage": 1016448, "flush_reason": "Manual Compaction"}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289223221, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 605797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11160, "largest_seqno": 12152, "table_properties": {"data_size": 601985, "index_size": 1528, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9676, "raw_average_key_size": 19, "raw_value_size": 593796, "raw_average_value_size": 1224, "num_data_blocks": 70, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065188, "oldest_key_time": 1768065188, "file_creation_time": 1768065289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9472 microseconds, and 5860 cpu microseconds.
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.223316) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 605797 bytes OK
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.223342) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.225397) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.225420) EVENT_LOG_v1 {"time_micros": 1768065289225413, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.225467) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 993570, prev total WAL file size 993570, number of live WAL files 2.
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.226367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(591KB)], [29(5945KB)]
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289226496, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6694291, "oldest_snapshot_seqno": -1}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3262 keys, 4966885 bytes, temperature: kUnknown
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289282198, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4966885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4943798, "index_size": 13826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75533, "raw_average_key_size": 23, "raw_value_size": 4884055, "raw_average_value_size": 1497, "num_data_blocks": 614, "num_entries": 3262, "num_filter_entries": 3262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.282492) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4966885 bytes
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.284123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.0 rd, 89.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.8 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(19.2) write-amplify(8.2) OK, records in: 3729, records dropped: 467 output_compression: NoCompression
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.284145) EVENT_LOG_v1 {"time_micros": 1768065289284134, "job": 12, "event": "compaction_finished", "compaction_time_micros": 55806, "compaction_time_cpu_micros": 31389, "output_level": 6, "num_output_files": 1, "total_output_size": 4966885, "num_input_records": 3729, "num_output_records": 3262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289284427, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065289285820, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.226262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:14:49 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:14:49.285942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:14:50 np0005580781 podman[229950]: 2026-01-10 17:14:50.199287807 +0000 UTC m=+0.189907754 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:14:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:53 np0005580781 python3.9[230106]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 10 12:14:54 np0005580781 podman[230207]: 2026-01-10 17:14:54.131189568 +0000 UTC m=+0.129120213 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 10 12:14:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:54 np0005580781 python3.9[230279]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 10 12:14:55 np0005580781 python3.9[230437]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 10 12:14:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:56 np0005580781 systemd-logind[798]: New session 51 of user zuul.
Jan 10 12:14:56 np0005580781 systemd[1]: Started Session 51 of User zuul.
Jan 10 12:14:56 np0005580781 systemd[1]: session-51.scope: Deactivated successfully.
Jan 10 12:14:56 np0005580781 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Jan 10 12:14:56 np0005580781 systemd-logind[798]: Removed session 51.
Jan 10 12:14:57 np0005580781 python3.9[230690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:14:57 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:14:58 np0005580781 podman[230888]: 2026-01-10 17:14:58.382334008 +0000 UTC m=+0.064276285 container create cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 12:14:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:14:58 np0005580781 python3.9[230875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065297.1307986-986-204881653828104/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:14:58 np0005580781 systemd[1]: Started libpod-conmon-cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6.scope.
Jan 10 12:14:58 np0005580781 podman[230888]: 2026-01-10 17:14:58.355486546 +0000 UTC m=+0.037428823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:14:58 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:14:58 np0005580781 podman[230888]: 2026-01-10 17:14:58.485490902 +0000 UTC m=+0.167433239 container init cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 12:14:58 np0005580781 podman[230888]: 2026-01-10 17:14:58.497535991 +0000 UTC m=+0.179478278 container start cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:14:58 np0005580781 podman[230888]: 2026-01-10 17:14:58.502280306 +0000 UTC m=+0.184222633 container attach cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:14:58 np0005580781 competent_pare[230905]: 167 167
Jan 10 12:14:58 np0005580781 systemd[1]: libpod-cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6.scope: Deactivated successfully.
Jan 10 12:14:58 np0005580781 podman[230888]: 2026-01-10 17:14:58.50693709 +0000 UTC m=+0.188879377 container died cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:14:58 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9eeedf1fd6a422b82f33973e9e697a6c0ced01dbb2581b6099ea2ce8561d8577-merged.mount: Deactivated successfully.
Jan 10 12:14:58 np0005580781 podman[230888]: 2026-01-10 17:14:58.558785804 +0000 UTC m=+0.240728091 container remove cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:14:58 np0005580781 systemd[1]: libpod-conmon-cca717fbbf94af8ebb36e107f84b7ac78f362744a54f83a746832f5614626ec6.scope: Deactivated successfully.
Jan 10 12:14:58 np0005580781 podman[230999]: 2026-01-10 17:14:58.791970613 +0000 UTC m=+0.067565491 container create 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:14:58 np0005580781 systemd[1]: Started libpod-conmon-5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc.scope.
Jan 10 12:14:58 np0005580781 podman[230999]: 2026-01-10 17:14:58.76732517 +0000 UTC m=+0.042920088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:14:58 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:14:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:14:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:14:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:14:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:14:58 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:14:58 np0005580781 podman[230999]: 2026-01-10 17:14:58.90956014 +0000 UTC m=+0.185155078 container init 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:14:58 np0005580781 podman[230999]: 2026-01-10 17:14:58.924112875 +0000 UTC m=+0.199707743 container start 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:14:58 np0005580781 podman[230999]: 2026-01-10 17:14:58.928880012 +0000 UTC m=+0.204474880 container attach 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 10 12:14:59 np0005580781 python3.9[231100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:14:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:14:59 np0005580781 eloquent_jennings[231045]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:14:59 np0005580781 eloquent_jennings[231045]: --> All data devices are unavailable
Jan 10 12:14:59 np0005580781 systemd[1]: libpod-5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc.scope: Deactivated successfully.
Jan 10 12:14:59 np0005580781 podman[230999]: 2026-01-10 17:14:59.5808504 +0000 UTC m=+0.856445318 container died 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 10 12:14:59 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1d5eb5200c9ad0fcf875f351a191747a96e39cb1c0c56ebe6c8fe51a555a2510-merged.mount: Deactivated successfully.
Jan 10 12:14:59 np0005580781 podman[230999]: 2026-01-10 17:14:59.63821029 +0000 UTC m=+0.913805168 container remove 5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 12:14:59 np0005580781 systemd[1]: libpod-conmon-5998d546caecc37c8150a75bfa934e6ff5484abfd94fe8bfe27b664dbff8b3fc.scope: Deactivated successfully.
Jan 10 12:14:59 np0005580781 python3.9[231191]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:15:00 np0005580781 podman[231367]: 2026-01-10 17:15:00.255326644 +0000 UTC m=+0.063592566 container create 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 12:15:00 np0005580781 systemd[1]: Started libpod-conmon-4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5.scope.
Jan 10 12:15:00 np0005580781 podman[231367]: 2026-01-10 17:15:00.219804323 +0000 UTC m=+0.028070295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:15:00 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:15:00 np0005580781 podman[231367]: 2026-01-10 17:15:00.354861252 +0000 UTC m=+0.163127224 container init 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:15:00 np0005580781 podman[231367]: 2026-01-10 17:15:00.36572255 +0000 UTC m=+0.173988442 container start 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:15:00 np0005580781 podman[231367]: 2026-01-10 17:15:00.369046938 +0000 UTC m=+0.177312930 container attach 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:15:00 np0005580781 blissful_kalam[231426]: 167 167
Jan 10 12:15:00 np0005580781 systemd[1]: libpod-4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5.scope: Deactivated successfully.
Jan 10 12:15:00 np0005580781 podman[231367]: 2026-01-10 17:15:00.373972568 +0000 UTC m=+0.182238490 container died 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 12:15:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:00 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b21ee734c4ca872e3292b141ce87fd3eb015116b8ad7f4ac04da673ab25ae083-merged.mount: Deactivated successfully.
Jan 10 12:15:00 np0005580781 podman[231367]: 2026-01-10 17:15:00.429558901 +0000 UTC m=+0.237824833 container remove 4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:15:00 np0005580781 systemd[1]: libpod-conmon-4d1642705ddcfaadfad910793239522edb7df4659c3ef1995e3b6d738c3c88f5.scope: Deactivated successfully.
Jan 10 12:15:00 np0005580781 python3.9[231435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:15:00 np0005580781 podman[231457]: 2026-01-10 17:15:00.644787254 +0000 UTC m=+0.044570822 container create 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:15:00 np0005580781 systemd[1]: Started libpod-conmon-4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216.scope.
Jan 10 12:15:00 np0005580781 podman[231457]: 2026-01-10 17:15:00.624128327 +0000 UTC m=+0.023911945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:15:00 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:15:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:00 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:00 np0005580781 podman[231457]: 2026-01-10 17:15:00.771521733 +0000 UTC m=+0.171305321 container init 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:15:00 np0005580781 podman[231457]: 2026-01-10 17:15:00.782110583 +0000 UTC m=+0.181894141 container start 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:15:00 np0005580781 podman[231457]: 2026-01-10 17:15:00.785926405 +0000 UTC m=+0.185710033 container attach 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]: {
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:    "0": [
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:        {
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "devices": [
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "/dev/loop3"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            ],
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_name": "ceph_lv0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_size": "21470642176",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "name": "ceph_lv0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "tags": {
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cluster_name": "ceph",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.crush_device_class": "",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.encrypted": "0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.objectstore": "bluestore",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osd_id": "0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.type": "block",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.vdo": "0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.with_tpm": "0"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            },
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "type": "block",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "vg_name": "ceph_vg0"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:        }
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:    ],
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:    "1": [
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:        {
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "devices": [
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "/dev/loop4"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            ],
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_name": "ceph_lv1",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_size": "21470642176",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "name": "ceph_lv1",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "tags": {
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cluster_name": "ceph",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.crush_device_class": "",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.encrypted": "0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.objectstore": "bluestore",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osd_id": "1",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.type": "block",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.vdo": "0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.with_tpm": "0"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            },
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "type": "block",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "vg_name": "ceph_vg1"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:        }
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:    ],
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:    "2": [
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:        {
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "devices": [
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "/dev/loop5"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            ],
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_name": "ceph_lv2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_size": "21470642176",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "name": "ceph_lv2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "tags": {
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.cluster_name": "ceph",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.crush_device_class": "",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.encrypted": "0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.objectstore": "bluestore",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osd_id": "2",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.type": "block",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.vdo": "0",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:                "ceph.with_tpm": "0"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            },
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "type": "block",
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:            "vg_name": "ceph_vg2"
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:        }
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]:    ]
Jan 10 12:15:01 np0005580781 gifted_cartwright[231510]: }
Jan 10 12:15:01 np0005580781 systemd[1]: libpod-4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216.scope: Deactivated successfully.
Jan 10 12:15:01 np0005580781 podman[231604]: 2026-01-10 17:15:01.15780412 +0000 UTC m=+0.025476176 container died 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:15:01 np0005580781 python3.9[231600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065299.9343207-986-270210907857195/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:15:01 np0005580781 systemd[1]: var-lib-containers-storage-overlay-31063605dc7d8721ceb7b262a52af319c16a11c2aaf41863ddbde71669a300f0-merged.mount: Deactivated successfully.
Jan 10 12:15:01 np0005580781 podman[231604]: 2026-01-10 17:15:01.191261146 +0000 UTC m=+0.058933192 container remove 4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 12:15:01 np0005580781 systemd[1]: libpod-conmon-4c27d1a58186d8dcddb286e018b8cb4d5ab8c113bf4608ccb54932f3e346d216.scope: Deactivated successfully.
Jan 10 12:15:01 np0005580781 podman[231830]: 2026-01-10 17:15:01.646004848 +0000 UTC m=+0.047633944 container create 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:15:01 np0005580781 systemd[1]: Started libpod-conmon-03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443.scope.
Jan 10 12:15:01 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:15:01 np0005580781 podman[231830]: 2026-01-10 17:15:01.624916719 +0000 UTC m=+0.026545775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:15:01 np0005580781 podman[231830]: 2026-01-10 17:15:01.745104244 +0000 UTC m=+0.146733360 container init 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 12:15:01 np0005580781 podman[231830]: 2026-01-10 17:15:01.75252442 +0000 UTC m=+0.154153526 container start 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 12:15:01 np0005580781 podman[231830]: 2026-01-10 17:15:01.756317151 +0000 UTC m=+0.157946277 container attach 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:15:01 np0005580781 loving_archimedes[231847]: 167 167
Jan 10 12:15:01 np0005580781 systemd[1]: libpod-03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443.scope: Deactivated successfully.
Jan 10 12:15:01 np0005580781 podman[231830]: 2026-01-10 17:15:01.761761035 +0000 UTC m=+0.163390091 container died 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 10 12:15:01 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0f4be5cf6c2104ccc666cabd35ecec5fef40f687bba64505a8380c80f0185608-merged.mount: Deactivated successfully.
Jan 10 12:15:01 np0005580781 python3.9[231832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:15:01 np0005580781 podman[231830]: 2026-01-10 17:15:01.805198136 +0000 UTC m=+0.206827192 container remove 03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:15:01 np0005580781 systemd[1]: libpod-conmon-03c2911332b7637af9afdc6686148da7b82c528f32adabe0a8873861e9976443.scope: Deactivated successfully.
Jan 10 12:15:01 np0005580781 podman[231915]: 2026-01-10 17:15:01.983568573 +0000 UTC m=+0.049033800 container create 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 12:15:02 np0005580781 systemd[1]: Started libpod-conmon-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope.
Jan 10 12:15:02 np0005580781 podman[231915]: 2026-01-10 17:15:01.963099171 +0000 UTC m=+0.028564418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:15:02 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:15:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:02 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:02 np0005580781 podman[231915]: 2026-01-10 17:15:02.114940405 +0000 UTC m=+0.180405682 container init 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:15:02 np0005580781 podman[231915]: 2026-01-10 17:15:02.127634061 +0000 UTC m=+0.193099298 container start 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:15:02 np0005580781 podman[231915]: 2026-01-10 17:15:02.132210083 +0000 UTC m=+0.197675330 container attach 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:15:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:02 np0005580781 python3.9[232014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065301.3100128-986-55411747427617/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:15:02 np0005580781 lvm[232237]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:15:02 np0005580781 lvm[232237]: VG ceph_vg1 finished
Jan 10 12:15:02 np0005580781 lvm[232233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:15:02 np0005580781 lvm[232240]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:15:02 np0005580781 lvm[232240]: VG ceph_vg2 finished
Jan 10 12:15:02 np0005580781 lvm[232233]: VG ceph_vg0 finished
Jan 10 12:15:02 np0005580781 clever_montalcini[231960]: {}
Jan 10 12:15:02 np0005580781 systemd[1]: libpod-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope: Deactivated successfully.
Jan 10 12:15:03 np0005580781 systemd[1]: libpod-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope: Consumed 1.471s CPU time.
Jan 10 12:15:03 np0005580781 podman[231915]: 2026-01-10 17:15:03.000892954 +0000 UTC m=+1.066358181 container died 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:15:03 np0005580781 systemd[1]: var-lib-containers-storage-overlay-39e5e8a223f4f4cb1a3de8d0d43176f215c1a359b8b46e45f8675a57479827b8-merged.mount: Deactivated successfully.
Jan 10 12:15:03 np0005580781 podman[231915]: 2026-01-10 17:15:03.057549095 +0000 UTC m=+1.123014302 container remove 97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 10 12:15:03 np0005580781 systemd[1]: libpod-conmon-97c15b90c2c09b5855d4e56daf2cd8e013cc6c9fd8630c86596de24361c1fdbd.scope: Deactivated successfully.
Jan 10 12:15:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:15:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:15:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:15:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:15:03 np0005580781 python3.9[232241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:15:03 np0005580781 python3.9[232400]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065302.5831914-986-40725975583737/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:15:03 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:15:03 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:15:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:04 np0005580781 python3.9[232550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:15:05 np0005580781 python3.9[232671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065303.8301475-986-40380494422430/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:15:05 np0005580781 python3.9[232823]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:15:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:06 np0005580781 python3.9[232975]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:15:07 np0005580781 python3.9[233127]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:15:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:08 np0005580781 python3.9[233279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:15:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:15:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:15:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:15:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:15:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:15:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:15:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:09 np0005580781 python3.9[233402]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1768065307.9722266-1093-106154326139438/.source _original_basename=.jixremqj follow=False checksum=b4fbc2aa16e07d05be4a21268488f740d2389779 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 10 12:15:10 np0005580781 python3.9[233554]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:15:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:10 np0005580781 python3.9[233706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:15:11 np0005580781 python3.9[233827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065310.3299131-1119-207001563854561/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:15:12 np0005580781 python3.9[233977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 10 12:15:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:12 np0005580781 python3.9[234098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768065311.7035794-1134-245197114537769/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 10 12:15:14 np0005580781 python3.9[234250]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 10 12:15:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:15 np0005580781 python3.9[234402]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 12:15:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:16 np0005580781 python3[234554]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 12:15:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:22 np0005580781 podman[234610]: 2026-01-10 17:15:22.072901747 +0000 UTC m=+1.071834055 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Jan 10 12:15:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:26 np0005580781 podman[234653]: 2026-01-10 17:15:26.456359562 +0000 UTC m=+1.442544688 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 10 12:15:26 np0005580781 podman[234569]: 2026-01-10 17:15:26.510945459 +0000 UTC m=+9.699653459 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 10 12:15:26 np0005580781 podman[234697]: 2026-01-10 17:15:26.760360259 +0000 UTC m=+0.083433222 container create 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 10 12:15:26 np0005580781 podman[234697]: 2026-01-10 17:15:26.725475964 +0000 UTC m=+0.048548957 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 10 12:15:26 np0005580781 python3[234554]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 10 12:15:27 np0005580781 python3.9[234887]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:15:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:29 np0005580781 python3.9[235041]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 10 12:15:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:30 np0005580781 python3.9[235193]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 10 12:15:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:31 np0005580781 python3[235345]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 10 12:15:31 np0005580781 podman[235384]: 2026-01-10 17:15:31.782417808 +0000 UTC m=+0.075844451 container create 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251202, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:15:31 np0005580781 podman[235384]: 2026-01-10 17:15:31.746770154 +0000 UTC m=+0.040196847 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 10 12:15:31 np0005580781 python3[235345]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 10 12:15:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:32 np0005580781 python3.9[235574]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:15:33 np0005580781 python3.9[235728]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:15:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:34 np0005580781 python3.9[235879]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768065333.9501176-1230-255364035124942/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 10 12:15:35 np0005580781 python3.9[235955]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 10 12:15:35 np0005580781 systemd[1]: Reloading.
Jan 10 12:15:35 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:15:35 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:15:36 np0005580781 python3.9[236066]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 10 12:15:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:37 np0005580781 systemd[1]: Reloading.
Jan 10 12:15:37 np0005580781 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 10 12:15:37 np0005580781 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 10 12:15:37 np0005580781 systemd[1]: Starting nova_compute container...
Jan 10 12:15:37 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:15:37 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:37 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:37 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:37 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:37 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:37 np0005580781 podman[236106]: 2026-01-10 17:15:37.924506789 +0000 UTC m=+0.141826590 container init 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 10 12:15:37 np0005580781 podman[236106]: 2026-01-10 17:15:37.974398391 +0000 UTC m=+0.191718062 container start 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Jan 10 12:15:37 np0005580781 podman[236106]: nova_compute
Jan 10 12:15:37 np0005580781 nova_compute[236122]: + sudo -E kolla_set_configs
Jan 10 12:15:37 np0005580781 systemd[1]: Started nova_compute container.
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:15:38
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta']
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Validating config file
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying service configuration files
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Deleting /etc/ceph
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Creating directory /etc/ceph
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Writing out command to execute
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:38 np0005580781 nova_compute[236122]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 12:15:38 np0005580781 nova_compute[236122]: ++ cat /run_command
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + CMD=nova-compute
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + ARGS=
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + sudo kolla_copy_cacerts
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + [[ ! -n '' ]]
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + . kolla_extend_start
Jan 10 12:15:38 np0005580781 nova_compute[236122]: Running command: 'nova-compute'
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + echo 'Running command: '\''nova-compute'\'''
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + umask 0022
Jan 10 12:15:38 np0005580781 nova_compute[236122]: + exec nova-compute
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:15:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:15:39 np0005580781 python3.9[236283]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:15:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:15:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:15:39 np0005580781 python3.9[236434]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:15:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:40 np0005580781 nova_compute[236122]: 2026-01-10 17:15:40.611 236126 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 10 12:15:40 np0005580781 nova_compute[236122]: 2026-01-10 17:15:40.611 236126 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 10 12:15:40 np0005580781 nova_compute[236122]: 2026-01-10 17:15:40.611 236126 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 10 12:15:40 np0005580781 nova_compute[236122]: 2026-01-10 17:15:40.612 236126 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 10 12:15:40 np0005580781 nova_compute[236122]: 2026-01-10 17:15:40.749 236126 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:15:40 np0005580781 nova_compute[236122]: 2026-01-10 17:15:40.780 236126 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:15:40 np0005580781 nova_compute[236122]: 2026-01-10 17:15:40.781 236126 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 10 12:15:40 np0005580781 python3.9[236586]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.465 236126 INFO nova.virt.driver [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.716 236126 INFO nova.compute.provider_config [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.731 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.731 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.732 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.733 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.734 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.735 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.736 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.737 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.738 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.739 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.740 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.741 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.742 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.743 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.744 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.745 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.746 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.747 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.748 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.749 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.750 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.751 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.752 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.753 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.754 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.755 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.756 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.757 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.758 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.759 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.760 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.761 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.762 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.763 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.764 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.765 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.766 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.767 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.768 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.769 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.770 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.771 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.772 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.773 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.774 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.775 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.776 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.777 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.778 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.779 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.780 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.781 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.782 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.783 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.784 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.785 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.786 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.787 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.788 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.789 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.790 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.791 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.792 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.793 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.794 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.795 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.796 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.797 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.798 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.799 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.800 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.801 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.802 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.803 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.804 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.805 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.806 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.807 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.808 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.809 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.810 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.811 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.812 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.813 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 WARNING oslo_config.cfg [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 10 12:15:41 np0005580781 nova_compute[236122]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 10 12:15:41 np0005580781 nova_compute[236122]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 10 12:15:41 np0005580781 nova_compute[236122]: and ``live_migration_inbound_addr`` respectively.
Jan 10 12:15:41 np0005580781 nova_compute[236122]: ).  Its value may be silently ignored in the future.#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.814 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.815 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.816 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_secret_uuid        = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.817 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.818 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.819 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.820 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.821 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.822 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.823 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.824 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.825 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.826 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.827 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.828 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.829 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.830 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.831 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.832 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.833 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.834 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.835 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.836 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.837 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.838 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.839 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.840 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.841 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.842 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.843 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.843 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.844 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.844 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.845 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.845 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.846 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.846 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.846 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.847 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.848 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.849 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.850 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.851 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.852 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.853 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.854 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.855 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.856 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.857 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.858 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.859 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.860 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.861 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.862 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.863 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.864 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.865 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.866 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.867 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.868 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.869 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.870 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.871 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.872 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.873 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.879 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.879 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.879 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.880 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.881 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.882 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.883 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.884 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.885 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.886 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.887 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.888 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.889 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.890 236126 DEBUG oslo_service.service [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.891 236126 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.913 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.914 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.914 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 10 12:15:41 np0005580781 nova_compute[236122]: 2026-01-10 17:15:41.915 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 10 12:15:41 np0005580781 python3.9[236740]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 10 12:15:41 np0005580781 systemd[1]: Starting libvirt QEMU daemon...
Jan 10 12:15:41 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 12:15:41 np0005580781 systemd[1]: Started libvirt QEMU daemon.
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 2026-01-10 17:15:42.006 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f65352c0ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 2026-01-10 17:15:42.008 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f65352c0ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 2026-01-10 17:15:42.010 236126 INFO nova.virt.libvirt.driver [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 10 12:15:42 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 2026-01-10 17:15:42.043 236126 WARNING nova.virt.libvirt.driver [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 2026-01-10 17:15:42.043 236126 DEBUG nova.virt.libvirt.volume.mount [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 10 12:15:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 2026-01-10 17:15:42.969 236126 INFO nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host capabilities <capabilities>
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 
Jan 10 12:15:42 np0005580781 nova_compute[236122]:  <host>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <uuid>a9d7d544-72dd-4b08-9e5e-495057bde287</uuid>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <cpu>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <arch>x86_64</arch>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <model>EPYC-Rome-v4</model>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <vendor>AMD</vendor>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <microcode version='16777317'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <signature family='23' model='49' stepping='0'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='x2apic'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='tsc-deadline'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='osxsave'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='hypervisor'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='tsc_adjust'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='spec-ctrl'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='stibp'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='arch-capabilities'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='ssbd'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='cmp_legacy'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='topoext'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='virt-ssbd'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='lbrv'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='tsc-scale'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='vmcb-clean'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='pause-filter'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='pfthreshold'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='svme-addr-chk'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='rdctl-no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='skip-l1dfl-vmentry'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='mds-no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <feature name='pschange-mc-no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <pages unit='KiB' size='4'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <pages unit='KiB' size='2048'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <pages unit='KiB' size='1048576'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </cpu>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <power_management>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <suspend_mem/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </power_management>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <iommu support='no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <migration_features>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <live/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <uri_transports>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:        <uri_transport>tcp</uri_transport>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:        <uri_transport>rdma</uri_transport>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      </uri_transports>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </migration_features>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <topology>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <cells num='1'>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:        <cell id='0'>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          <memory unit='KiB'>7864312</memory>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          <pages unit='KiB' size='2048'>0</pages>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          <distances>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <sibling id='0' value='10'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          </distances>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          <cpus num='8'>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:          </cpus>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:        </cell>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      </cells>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </topology>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <cache>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </cache>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <secmodel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <model>selinux</model>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <doi>0</doi>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </secmodel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <secmodel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <model>dac</model>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <doi>0</doi>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </secmodel>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:  </host>
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 
Jan 10 12:15:42 np0005580781 nova_compute[236122]:  <guest>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <os_type>hvm</os_type>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <arch name='i686'>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <wordsize>32</wordsize>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <domain type='qemu'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <domain type='kvm'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </arch>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <features>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <pae/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <nonpae/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <acpi default='on' toggle='yes'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <apic default='on' toggle='no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <cpuselection/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <deviceboot/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <disksnapshot default='on' toggle='no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <externalSnapshot/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </features>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:  </guest>
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 
Jan 10 12:15:42 np0005580781 nova_compute[236122]:  <guest>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <os_type>hvm</os_type>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <arch name='x86_64'>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <wordsize>64</wordsize>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <domain type='qemu'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <domain type='kvm'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </arch>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    <features>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <acpi default='on' toggle='yes'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <apic default='on' toggle='no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <cpuselection/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <deviceboot/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <disksnapshot default='on' toggle='no'/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:      <externalSnapshot/>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:    </features>
Jan 10 12:15:42 np0005580781 nova_compute[236122]:  </guest>
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 
Jan 10 12:15:42 np0005580781 nova_compute[236122]: </capabilities>
Jan 10 12:15:42 np0005580781 nova_compute[236122]: #033[00m
Jan 10 12:15:42 np0005580781 python3.9[236973]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 10 12:15:42 np0005580781 nova_compute[236122]: 2026-01-10 17:15:42.983 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 10 12:15:43 np0005580781 nova_compute[236122]: 2026-01-10 17:15:43.008 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 10 12:15:43 np0005580781 nova_compute[236122]: <domainCapabilities>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <path>/usr/libexec/qemu-kvm</path>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <domain>kvm</domain>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <arch>i686</arch>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <vcpu max='4096'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <iothreads supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <os supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <enum name='firmware'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <loader supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>rom</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pflash</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='readonly'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>yes</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>no</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='secure'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>no</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </loader>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </os>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <cpu>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='host-passthrough' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='hostPassthroughMigratable'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>on</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>off</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='maximum' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='maximumMigratable'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>on</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>off</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='host-model' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <vendor>AMD</vendor>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='x2apic'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc-deadline'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='hypervisor'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc_adjust'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='spec-ctrl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='stibp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='cmp_legacy'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='overflow-recov'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='succor'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='amd-ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='virt-ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='lbrv'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc-scale'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='vmcb-clean'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='flushbyasid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='pause-filter'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='pfthreshold'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='svme-addr-chk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='disable' name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='custom' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Dhyana-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Genoa'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='auto-ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Genoa-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='auto-ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-128'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-256'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-512'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v6'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v7'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='KnightsMill'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512er'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512pf'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='KnightsMill-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512er'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512pf'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 12:15:43 np0005580781 systemd[1]: Stopping nova_compute container...
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G4-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tbm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G5-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tbm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SierraForest'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cmpccxadd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SierraForest-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cmpccxadd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='athlon'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='athlon-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='core2duo'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='core2duo-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='coreduo'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='coreduo-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='n270'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='n270-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='phenom'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='phenom-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </cpu>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <memoryBacking supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <enum name='sourceType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>file</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>anonymous</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>memfd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </memoryBacking>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <devices>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <disk supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='diskDevice'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>disk</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>cdrom</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>floppy</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>lun</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='bus'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>fdc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>scsi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>sata</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-non-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </disk>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <graphics supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vnc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>egl-headless</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dbus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </graphics>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <video supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='modelType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vga</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>cirrus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>none</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>bochs</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>ramfb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </video>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <hostdev supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='mode'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>subsystem</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='startupPolicy'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>default</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>mandatory</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>requisite</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>optional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='subsysType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pci</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>scsi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='capsType'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='pciBackend'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </hostdev>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <rng supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-non-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>random</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>egd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>builtin</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </rng>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <filesystem supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='driverType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>path</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>handle</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtiofs</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </filesystem>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <tpm supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tpm-tis</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tpm-crb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>emulator</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>external</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendVersion'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>2.0</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </tpm>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <redirdev supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='bus'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </redirdev>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <channel supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pty</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>unix</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </channel>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <crypto supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>qemu</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>builtin</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </crypto>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <interface supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>default</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>passt</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </interface>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <panic supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>isa</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>hyperv</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </panic>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <console supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>null</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pty</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dev</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>file</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pipe</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>stdio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>udp</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tcp</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>unix</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>qemu-vdagent</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dbus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </console>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </devices>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <features>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <gic supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <vmcoreinfo supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <genid supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <backingStoreInput supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <backup supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <async-teardown supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <ps2 supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <sev supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <sgx supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <hyperv supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='features'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>relaxed</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vapic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>spinlocks</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vpindex</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>runtime</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>synic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>stimer</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>reset</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vendor_id</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>frequencies</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>reenlightenment</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tlbflush</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>ipi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>avic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>emsr_bitmap</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>xmm_input</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <defaults>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <spinlocks>4095</spinlocks>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <stimer_direct>on</stimer_direct>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <tlbflush_direct>on</tlbflush_direct>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <tlbflush_extended>on</tlbflush_extended>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </defaults>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </hyperv>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <launchSecurity supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='sectype'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tdx</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </launchSecurity>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </features>
Jan 10 12:15:43 np0005580781 nova_compute[236122]: </domainCapabilities>
Jan 10 12:15:43 np0005580781 nova_compute[236122]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 12:15:43 np0005580781 nova_compute[236122]: 2026-01-10 17:15:43.014 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 10 12:15:43 np0005580781 nova_compute[236122]: <domainCapabilities>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <path>/usr/libexec/qemu-kvm</path>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <domain>kvm</domain>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <arch>i686</arch>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <vcpu max='240'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <iothreads supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <os supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <enum name='firmware'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <loader supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>rom</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pflash</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='readonly'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>yes</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>no</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='secure'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>no</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </loader>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </os>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <cpu>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='host-passthrough' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='hostPassthroughMigratable'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>on</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>off</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='maximum' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='maximumMigratable'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>on</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>off</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='host-model' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <vendor>AMD</vendor>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='x2apic'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc-deadline'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='hypervisor'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc_adjust'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='spec-ctrl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='stibp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='cmp_legacy'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='overflow-recov'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='succor'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='amd-ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='virt-ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='lbrv'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc-scale'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='vmcb-clean'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='flushbyasid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='pause-filter'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='pfthreshold'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='svme-addr-chk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='disable' name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='custom' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Dhyana-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Genoa'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='auto-ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Genoa-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='auto-ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-128'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-256'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-512'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v6'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v7'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='KnightsMill'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512er'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512pf'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='KnightsMill-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512er'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512pf'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G4-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tbm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G5-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tbm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SierraForest'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cmpccxadd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SierraForest-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cmpccxadd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='athlon'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='athlon-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='core2duo'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='core2duo-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='coreduo'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='coreduo-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='n270'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='n270-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='phenom'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='phenom-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </cpu>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <memoryBacking supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <enum name='sourceType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>file</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>anonymous</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>memfd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </memoryBacking>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <devices>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <disk supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='diskDevice'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>disk</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>cdrom</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>floppy</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>lun</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='bus'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>ide</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>fdc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>scsi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>sata</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-non-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </disk>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <graphics supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vnc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>egl-headless</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dbus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </graphics>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <video supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='modelType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vga</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>cirrus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>none</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>bochs</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>ramfb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </video>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <hostdev supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='mode'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>subsystem</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='startupPolicy'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>default</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>mandatory</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>requisite</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>optional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='subsysType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pci</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>scsi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='capsType'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='pciBackend'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </hostdev>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <rng supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-non-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>random</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>egd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>builtin</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </rng>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <filesystem supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='driverType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>path</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>handle</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtiofs</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </filesystem>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <tpm supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tpm-tis</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tpm-crb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>emulator</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>external</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendVersion'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>2.0</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </tpm>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <redirdev supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='bus'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </redirdev>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <channel supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pty</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>unix</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </channel>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <crypto supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>qemu</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>builtin</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </crypto>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <interface supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>default</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>passt</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </interface>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <panic supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>isa</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>hyperv</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </panic>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <console supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>null</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pty</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dev</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>file</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pipe</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>stdio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>udp</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tcp</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>unix</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>qemu-vdagent</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dbus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </console>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </devices>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <features>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <gic supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <vmcoreinfo supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <genid supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <backingStoreInput supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <backup supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <async-teardown supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <ps2 supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <sev supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <sgx supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <hyperv supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='features'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>relaxed</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vapic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>spinlocks</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vpindex</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>runtime</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>synic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>stimer</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>reset</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vendor_id</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>frequencies</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>reenlightenment</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tlbflush</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>ipi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>avic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>emsr_bitmap</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>xmm_input</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <defaults>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <spinlocks>4095</spinlocks>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <stimer_direct>on</stimer_direct>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <tlbflush_direct>on</tlbflush_direct>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <tlbflush_extended>on</tlbflush_extended>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </defaults>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </hyperv>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <launchSecurity supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='sectype'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tdx</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </launchSecurity>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </features>
Jan 10 12:15:43 np0005580781 nova_compute[236122]: </domainCapabilities>
Jan 10 12:15:43 np0005580781 nova_compute[236122]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 12:15:43 np0005580781 nova_compute[236122]: 2026-01-10 17:15:43.054 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 10 12:15:43 np0005580781 nova_compute[236122]: 2026-01-10 17:15:43.058 236126 DEBUG nova.virt.libvirt.host [None req-f80cf4d5-4687-42f0-a2f3-9029deeb90f8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 10 12:15:43 np0005580781 nova_compute[236122]: <domainCapabilities>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <path>/usr/libexec/qemu-kvm</path>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <domain>kvm</domain>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <arch>x86_64</arch>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <vcpu max='4096'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <iothreads supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <os supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <enum name='firmware'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>efi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <loader supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>rom</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pflash</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='readonly'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>yes</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>no</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='secure'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>yes</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>no</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </loader>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </os>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <cpu>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='host-passthrough' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='hostPassthroughMigratable'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>on</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>off</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='maximum' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='maximumMigratable'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>on</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>off</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='host-model' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <vendor>AMD</vendor>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='x2apic'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc-deadline'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='hypervisor'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc_adjust'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='spec-ctrl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='stibp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='cmp_legacy'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='overflow-recov'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='succor'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='amd-ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='virt-ssbd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='lbrv'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='tsc-scale'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='vmcb-clean'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='flushbyasid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='pause-filter'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='pfthreshold'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='svme-addr-chk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <feature policy='disable' name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <mode name='custom' supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Broadwell-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cascadelake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Cooperlake-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Denverton-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Dhyana-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Genoa'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='auto-ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Genoa-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='auto-ibrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Milan-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amd-psfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='stibp-always-on'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-Rome-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='EPYC-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='GraniteRapids-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-128'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-256'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx10-512'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='prefetchiti'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Haswell-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-noTSX'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v6'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Icelake-Server-v7'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='IvyBridge-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='KnightsMill'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512er'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512pf'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='KnightsMill-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512er'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512pf'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G4-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tbm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Opteron_G5-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fma4'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tbm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xop'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SapphireRapids-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='amx-tile'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-bf16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-fp16'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bitalg'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrc'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fzrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='la57'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='taa-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xfd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SierraForest'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cmpccxadd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='SierraForest-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ifma'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cmpccxadd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fbsdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='fsrs'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ibrs-all'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mcdt-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pbrsb-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='psdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='serialize'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vaes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Client-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='hle'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='rtm'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Skylake-Server-v5'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512bw'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512cd'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512dq'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512f'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='avx512vl'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='invpcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pcid'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='pku'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='mpx'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v2'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v3'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='core-capability'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='split-lock-detect'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='Snowridge-v4'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='cldemote'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='erms'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='gfni'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdir64b'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='movdiri'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='xsaves'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='athlon'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='athlon-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='core2duo'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='core2duo-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='coreduo'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='coreduo-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='n270'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='n270-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='ss'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='phenom'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <blockers model='phenom-v1'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnow'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <feature name='3dnowext'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </blockers>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </mode>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </cpu>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <memoryBacking supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <enum name='sourceType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>file</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>anonymous</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <value>memfd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </memoryBacking>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <devices>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <disk supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='diskDevice'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>disk</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>cdrom</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>floppy</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>lun</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='bus'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>fdc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>scsi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>sata</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-non-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </disk>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <graphics supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vnc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>egl-headless</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dbus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </graphics>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <video supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='modelType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vga</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>cirrus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>none</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>bochs</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>ramfb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </video>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <hostdev supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='mode'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>subsystem</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='startupPolicy'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>default</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>mandatory</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>requisite</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>optional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='subsysType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pci</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>scsi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='capsType'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='pciBackend'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </hostdev>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <rng supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtio-non-transitional</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>random</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>egd</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>builtin</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </rng>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <filesystem supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='driverType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>path</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>handle</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>virtiofs</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </filesystem>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <tpm supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tpm-tis</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tpm-crb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>emulator</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>external</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendVersion'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>2.0</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </tpm>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <redirdev supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='bus'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>usb</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </redirdev>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <channel supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pty</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>unix</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </channel>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <crypto supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>qemu</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendModel'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>builtin</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </crypto>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <interface supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='backendType'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>default</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>passt</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </interface>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <panic supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='model'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>isa</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>hyperv</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </panic>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <console supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='type'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>null</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vc</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pty</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dev</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>file</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>pipe</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>stdio</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>udp</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tcp</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>unix</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>qemu-vdagent</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>dbus</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </console>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </devices>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  <features>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <gic supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <vmcoreinfo supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <genid supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <backingStoreInput supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <backup supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <async-teardown supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <ps2 supported='yes'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <sev supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <sgx supported='no'/>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <hyperv supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='features'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>relaxed</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vapic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>spinlocks</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vpindex</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>runtime</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>synic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>stimer</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>reset</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>vendor_id</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>frequencies</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>reenlightenment</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tlbflush</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>ipi</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>avic</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>emsr_bitmap</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>xmm_input</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <defaults>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <spinlocks>4095</spinlocks>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <stimer_direct>on</stimer_direct>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <tlbflush_direct>on</tlbflush_direct>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <tlbflush_extended>on</tlbflush_extended>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </defaults>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </hyperv>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    <launchSecurity supported='yes'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      <enum name='sectype'>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:        <value>tdx</value>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:      </enum>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:    </launchSecurity>
Jan 10 12:15:43 np0005580781 nova_compute[236122]:  </features>
Jan 10 12:15:43 np0005580781 nova_compute[236122]: </domainCapabilities>
Jan 10 12:15:43 np0005580781 nova_compute[236122]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 12:15:43 np0005580781 nova_compute[236122]: 2026-01-10 17:15:43.116 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 12:15:43 np0005580781 nova_compute[236122]: 2026-01-10 17:15:43.121 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 12:15:43 np0005580781 nova_compute[236122]: 2026-01-10 17:15:43.122 236126 DEBUG oslo_concurrency.lockutils [None req-9f2d43d1-82d7-45a5-ad54-c3917c51ea45 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 12:15:43 np0005580781 virtqemud[236762]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 10 12:15:43 np0005580781 virtqemud[236762]: hostname: compute-0
Jan 10 12:15:43 np0005580781 virtqemud[236762]: End of file while reading data: Input/output error
Jan 10 12:15:43 np0005580781 systemd[1]: libpod-8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02.scope: Deactivated successfully.
Jan 10 12:15:43 np0005580781 systemd[1]: libpod-8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02.scope: Consumed 3.500s CPU time.
Jan 10 12:15:43 np0005580781 podman[236981]: 2026-01-10 17:15:43.523821425 +0000 UTC m=+0.477849444 container died 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute)
Jan 10 12:15:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02-userdata-shm.mount: Deactivated successfully.
Jan 10 12:15:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay-52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7-merged.mount: Deactivated successfully.
Jan 10 12:15:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:15:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:15:45 np0005580781 podman[236981]: 2026-01-10 17:15:45.606449265 +0000 UTC m=+2.560477314 container cleanup 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible)
Jan 10 12:15:45 np0005580781 podman[236981]: nova_compute
Jan 10 12:15:45 np0005580781 podman[237018]: nova_compute
Jan 10 12:15:45 np0005580781 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 10 12:15:45 np0005580781 systemd[1]: Stopped nova_compute container.
Jan 10 12:15:45 np0005580781 systemd[1]: Starting nova_compute container...
Jan 10 12:15:45 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:15:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52eb7d54ee1a3effee233654d289e0ab9b595d43483ba376afc253a0cb5086a7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:45 np0005580781 podman[237032]: 2026-01-10 17:15:45.894579136 +0000 UTC m=+0.135187824 container init 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 10 12:15:45 np0005580781 podman[237032]: 2026-01-10 17:15:45.907753163 +0000 UTC m=+0.148361811 container start 8f8874914a56179fcc5831574e1cc112fdac465b9ddd5d3ee5069e9a44f58d02 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 10 12:15:45 np0005580781 podman[237032]: nova_compute
Jan 10 12:15:45 np0005580781 nova_compute[237049]: + sudo -E kolla_set_configs
Jan 10 12:15:45 np0005580781 systemd[1]: Started nova_compute container.
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Validating config file
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying service configuration files
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /etc/ceph
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Creating directory /etc/ceph
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Writing out command to execute
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:46 np0005580781 nova_compute[237049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 10 12:15:46 np0005580781 nova_compute[237049]: ++ cat /run_command
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + CMD=nova-compute
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + ARGS=
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + sudo kolla_copy_cacerts
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + [[ ! -n '' ]]
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + . kolla_extend_start
Jan 10 12:15:46 np0005580781 nova_compute[237049]: Running command: 'nova-compute'
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + echo 'Running command: '\''nova-compute'\'''
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + umask 0022
Jan 10 12:15:46 np0005580781 nova_compute[237049]: + exec nova-compute
Jan 10 12:15:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:47 np0005580781 python3.9[237212]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 10 12:15:47 np0005580781 systemd[1]: Started libpod-conmon-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd.scope.
Jan 10 12:15:47 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:15:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:47 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 10 12:15:47 np0005580781 podman[237236]: 2026-01-10 17:15:47.314482131 +0000 UTC m=+0.140088440 container init 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 10 12:15:47 np0005580781 podman[237236]: 2026-01-10 17:15:47.321814586 +0000 UTC m=+0.147420885 container start 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2)
Jan 10 12:15:47 np0005580781 python3.9[237212]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Applying nova statedir ownership
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 10 12:15:47 np0005580781 nova_compute_init[237257]: INFO:nova_statedir:Nova statedir ownership complete
Jan 10 12:15:47 np0005580781 systemd[1]: libpod-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd.scope: Deactivated successfully.
Jan 10 12:15:47 np0005580781 podman[237258]: 2026-01-10 17:15:47.413377684 +0000 UTC m=+0.052447111 container died 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 10 12:15:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay-5c546114f17d1c4a40376cc9ffd809fc39eb03c4df86f85b95bcda46b001bcfd-merged.mount: Deactivated successfully.
Jan 10 12:15:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd-userdata-shm.mount: Deactivated successfully.
Jan 10 12:15:47 np0005580781 podman[237265]: 2026-01-10 17:15:47.466235776 +0000 UTC m=+0.072266183 container cleanup 829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 10 12:15:47 np0005580781 systemd[1]: libpod-conmon-829794c073326f89be46fc607171dd9fff823b74d404292c89250303cc4e08fd.scope: Deactivated successfully.
Jan 10 12:15:48 np0005580781 systemd[1]: session-50.scope: Deactivated successfully.
Jan 10 12:15:48 np0005580781 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Jan 10 12:15:48 np0005580781 systemd[1]: session-50.scope: Consumed 2min 21.093s CPU time.
Jan 10 12:15:48 np0005580781 systemd-logind[798]: Removed session 50.
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.127 237053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.127 237053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.127 237053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.128 237053 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.294 237053 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.323 237053 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.324 237053 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 10 12:15:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:48 np0005580781 nova_compute[237049]: 2026-01-10 17:15:48.900 237053 INFO nova.virt.driver [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 10 12:15:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:15:48.917 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:15:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:15:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:15:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:15:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.022 237053 INFO nova.compute.provider_config [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.038 237053 DEBUG oslo_concurrency.lockutils [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.038 237053 DEBUG oslo_concurrency.lockutils [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_concurrency.lockutils [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.039 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.040 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.041 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.042 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.043 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.044 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.045 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.046 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.047 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.048 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.049 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.050 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.051 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.052 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.053 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.054 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.055 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.056 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.057 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.058 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.059 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.060 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.061 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.062 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.063 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.064 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.065 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.066 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.067 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.068 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.069 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.070 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.071 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.072 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.073 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.074 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.075 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.076 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.077 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.078 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.079 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.080 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.081 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.082 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.083 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.084 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.085 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.086 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.087 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.088 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.089 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.090 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.091 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.092 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.093 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.094 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.095 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.096 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.097 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.098 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.099 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.100 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.101 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.102 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.103 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.104 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.105 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.106 237053 WARNING oslo_config.cfg [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 10 12:15:49 np0005580781 nova_compute[237049]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 10 12:15:49 np0005580781 nova_compute[237049]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 10 12:15:49 np0005580781 nova_compute[237049]: and ``live_migration_inbound_addr`` respectively.
Jan 10 12:15:49 np0005580781 nova_compute[237049]: ).  Its value may be silently ignored in the future.
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.107 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.108 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_secret_uuid        = a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.109 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.110 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.111 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.112 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.113 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.114 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.115 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.116 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.117 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.118 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.119 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.120 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.121 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.122 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.123 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.124 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.125 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.126 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.127 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.128 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.129 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.130 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.131 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.132 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.133 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.134 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.135 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.136 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.137 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.138 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.139 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.140 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.141 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.142 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.143 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.144 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.145 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.146 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.147 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.148 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.149 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.150 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.151 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.152 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.153 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.154 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.155 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.156 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.157 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.158 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.159 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.160 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.161 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.161 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.161 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.162 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.163 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.164 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.164 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.164 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.165 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.166 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.167 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.168 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.168 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.168 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.169 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.170 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.171 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.172 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.173 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.173 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.173 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.174 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.175 237053 DEBUG oslo_service.service [None req-c9f199b3-cd15-4382-810d-2caa611c7271 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.176 237053 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.198 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.198 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.198 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.199 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.214 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f25e4d0c430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.216 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f25e4d0c430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.217 237053 INFO nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.224 237053 INFO nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host capabilities <capabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <host>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <uuid>a9d7d544-72dd-4b08-9e5e-495057bde287</uuid>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <arch>x86_64</arch>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model>EPYC-Rome-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <vendor>AMD</vendor>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <microcode version='16777317'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <signature family='23' model='49' stepping='0'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='x2apic'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='tsc-deadline'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='osxsave'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='hypervisor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='tsc_adjust'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='spec-ctrl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='stibp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='arch-capabilities'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='cmp_legacy'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='topoext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='virt-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='lbrv'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='tsc-scale'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='vmcb-clean'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='pause-filter'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='pfthreshold'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='svme-addr-chk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='rdctl-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='skip-l1dfl-vmentry'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='mds-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature name='pschange-mc-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <pages unit='KiB' size='4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <pages unit='KiB' size='2048'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <pages unit='KiB' size='1048576'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <power_management>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <suspend_mem/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </power_management>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <iommu support='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <migration_features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <live/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <uri_transports>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <uri_transport>tcp</uri_transport>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <uri_transport>rdma</uri_transport>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </uri_transports>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </migration_features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <topology>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <cells num='1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <cell id='0'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          <memory unit='KiB'>7864312</memory>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          <pages unit='KiB' size='2048'>0</pages>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          <distances>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <sibling id='0' value='10'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          </distances>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          <cpus num='8'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:          </cpus>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        </cell>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </cells>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </topology>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <cache>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </cache>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <secmodel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model>selinux</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <doi>0</doi>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </secmodel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <secmodel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model>dac</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <doi>0</doi>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </secmodel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </host>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <guest>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <os_type>hvm</os_type>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <arch name='i686'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <wordsize>32</wordsize>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <domain type='qemu'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <domain type='kvm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </arch>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <pae/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <nonpae/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <acpi default='on' toggle='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <apic default='on' toggle='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <cpuselection/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <deviceboot/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <disksnapshot default='on' toggle='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <externalSnapshot/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </guest>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <guest>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <os_type>hvm</os_type>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <arch name='x86_64'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <wordsize>64</wordsize>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <domain type='qemu'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <domain type='kvm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </arch>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <acpi default='on' toggle='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <apic default='on' toggle='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <cpuselection/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <deviceboot/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <disksnapshot default='on' toggle='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <externalSnapshot/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </guest>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 
Jan 10 12:15:49 np0005580781 nova_compute[237049]: </capabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.233 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.242 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 10 12:15:49 np0005580781 nova_compute[237049]: <domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <path>/usr/libexec/qemu-kvm</path>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <domain>kvm</domain>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <arch>i686</arch>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <vcpu max='4096'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <iothreads supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <os supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='firmware'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <loader supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>rom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pflash</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='readonly'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>yes</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='secure'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </loader>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </os>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-passthrough' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='hostPassthroughMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='maximum' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='maximumMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-model' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <vendor>AMD</vendor>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='x2apic'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-deadline'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='hypervisor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc_adjust'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='spec-ctrl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='stibp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='cmp_legacy'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='overflow-recov'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='succor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='amd-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='virt-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lbrv'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-scale'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='vmcb-clean'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='flushbyasid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pause-filter'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pfthreshold'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='svme-addr-chk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='disable' name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='custom' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Dhyana-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-128'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-256'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-512'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v6'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v7'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <memoryBacking supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='sourceType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>anonymous</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>memfd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </memoryBacking>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <disk supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='diskDevice'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>disk</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cdrom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>floppy</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>lun</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>fdc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>sata</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <graphics supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vnc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egl-headless</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </graphics>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <video supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='modelType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vga</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cirrus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>none</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>bochs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ramfb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </video>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hostdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='mode'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>subsystem</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='startupPolicy'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>mandatory</value>
Jan 10 12:15:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>requisite</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>optional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='subsysType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pci</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='capsType'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='pciBackend'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hostdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <rng supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>random</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </rng>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <filesystem supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='driverType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>path</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>handle</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtiofs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </filesystem>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <tpm supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-tis</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-crb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emulator</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>external</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendVersion'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>2.0</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </tpm>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <redirdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </redirdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <channel supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </channel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <crypto supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </crypto>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <interface supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>passt</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </interface>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <panic supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>isa</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>hyperv</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </panic>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <console supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>null</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dev</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pipe</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stdio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>udp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tcp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu-vdagent</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </console>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <gic supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <vmcoreinfo supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <genid supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backingStoreInput supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backup supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <async-teardown supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <ps2 supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sev supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sgx supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hyperv supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='features'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>relaxed</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vapic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>spinlocks</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vpindex</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>runtime</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>synic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stimer</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reset</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vendor_id</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>frequencies</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reenlightenment</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tlbflush</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ipi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>avic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emsr_bitmap</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>xmm_input</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <spinlocks>4095</spinlocks>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <stimer_direct>on</stimer_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_direct>on</tlbflush_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_extended>on</tlbflush_extended>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hyperv>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <launchSecurity supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='sectype'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tdx</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </launchSecurity>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: </domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.246 237053 WARNING nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.247 237053 DEBUG nova.virt.libvirt.volume.mount [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.252 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 10 12:15:49 np0005580781 nova_compute[237049]: <domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <path>/usr/libexec/qemu-kvm</path>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <domain>kvm</domain>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <arch>i686</arch>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <vcpu max='240'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <iothreads supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <os supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='firmware'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <loader supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>rom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pflash</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='readonly'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>yes</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='secure'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </loader>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </os>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-passthrough' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='hostPassthroughMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='maximum' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='maximumMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-model' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <vendor>AMD</vendor>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='x2apic'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-deadline'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='hypervisor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc_adjust'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='spec-ctrl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='stibp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='cmp_legacy'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='overflow-recov'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='succor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='amd-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='virt-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lbrv'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-scale'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='vmcb-clean'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='flushbyasid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pause-filter'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pfthreshold'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='svme-addr-chk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='disable' name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='custom' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Dhyana-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-128'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-256'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-512'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v6'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v7'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <memoryBacking supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='sourceType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>anonymous</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>memfd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </memoryBacking>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <disk supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='diskDevice'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>disk</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cdrom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>floppy</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>lun</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ide</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>fdc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>sata</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <graphics supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vnc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egl-headless</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </graphics>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <video supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='modelType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vga</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cirrus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>none</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>bochs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ramfb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </video>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hostdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='mode'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>subsystem</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='startupPolicy'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>mandatory</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>requisite</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>optional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='subsysType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pci</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='capsType'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='pciBackend'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hostdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <rng supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>random</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </rng>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <filesystem supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='driverType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>path</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>handle</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtiofs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </filesystem>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <tpm supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-tis</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-crb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emulator</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>external</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendVersion'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>2.0</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </tpm>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <redirdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </redirdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <channel supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </channel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <crypto supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </crypto>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <interface supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>passt</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </interface>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <panic supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>isa</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>hyperv</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </panic>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <console supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>null</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dev</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pipe</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stdio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>udp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tcp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu-vdagent</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </console>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <gic supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <vmcoreinfo supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <genid supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backingStoreInput supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backup supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <async-teardown supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <ps2 supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sev supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sgx supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hyperv supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='features'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>relaxed</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vapic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>spinlocks</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vpindex</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>runtime</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>synic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stimer</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reset</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vendor_id</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>frequencies</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reenlightenment</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tlbflush</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ipi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>avic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emsr_bitmap</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>xmm_input</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <spinlocks>4095</spinlocks>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <stimer_direct>on</stimer_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_direct>on</tlbflush_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_extended>on</tlbflush_extended>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hyperv>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <launchSecurity supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='sectype'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tdx</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </launchSecurity>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: </domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.291 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.296 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 10 12:15:49 np0005580781 nova_compute[237049]: <domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <path>/usr/libexec/qemu-kvm</path>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <domain>kvm</domain>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <arch>x86_64</arch>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <vcpu max='4096'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <iothreads supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <os supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='firmware'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>efi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <loader supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>rom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pflash</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='readonly'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>yes</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='secure'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>yes</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </loader>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </os>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-passthrough' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='hostPassthroughMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='maximum' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='maximumMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-model' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <vendor>AMD</vendor>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='x2apic'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-deadline'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='hypervisor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc_adjust'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='spec-ctrl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='stibp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='cmp_legacy'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='overflow-recov'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='succor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='amd-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='virt-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lbrv'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-scale'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='vmcb-clean'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='flushbyasid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pause-filter'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pfthreshold'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='svme-addr-chk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='disable' name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='custom' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Dhyana-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-128'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-256'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-512'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v6'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v7'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <memoryBacking supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='sourceType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>anonymous</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>memfd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </memoryBacking>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <disk supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='diskDevice'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>disk</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cdrom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>floppy</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>lun</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>fdc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>sata</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <graphics supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vnc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egl-headless</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </graphics>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <video supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='modelType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vga</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cirrus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>none</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>bochs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ramfb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </video>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hostdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='mode'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>subsystem</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='startupPolicy'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>mandatory</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>requisite</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>optional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='subsysType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pci</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='capsType'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='pciBackend'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hostdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <rng supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>random</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </rng>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <filesystem supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='driverType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>path</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>handle</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtiofs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </filesystem>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <tpm supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-tis</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-crb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emulator</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>external</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendVersion'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>2.0</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </tpm>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <redirdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </redirdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <channel supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </channel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <crypto supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </crypto>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <interface supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>passt</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </interface>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <panic supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>isa</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>hyperv</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </panic>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <console supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>null</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dev</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pipe</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stdio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>udp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tcp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu-vdagent</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </console>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <gic supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <vmcoreinfo supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <genid supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backingStoreInput supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backup supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <async-teardown supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <ps2 supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sev supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sgx supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hyperv supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='features'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>relaxed</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vapic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>spinlocks</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vpindex</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>runtime</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>synic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stimer</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reset</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vendor_id</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>frequencies</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reenlightenment</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tlbflush</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ipi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>avic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emsr_bitmap</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>xmm_input</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <spinlocks>4095</spinlocks>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <stimer_direct>on</stimer_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_direct>on</tlbflush_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_extended>on</tlbflush_extended>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hyperv>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <launchSecurity supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='sectype'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tdx</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </launchSecurity>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: </domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.353 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 10 12:15:49 np0005580781 nova_compute[237049]: <domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <path>/usr/libexec/qemu-kvm</path>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <domain>kvm</domain>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <arch>x86_64</arch>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <vcpu max='240'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <iothreads supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <os supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='firmware'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <loader supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>rom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pflash</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='readonly'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>yes</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='secure'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>no</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </loader>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </os>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-passthrough' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='hostPassthroughMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='maximum' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='maximumMigratable'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>on</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>off</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='host-model' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <vendor>AMD</vendor>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='x2apic'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-deadline'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='hypervisor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc_adjust'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='spec-ctrl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='stibp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='cmp_legacy'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='overflow-recov'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='succor'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='amd-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='virt-ssbd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lbrv'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='tsc-scale'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='vmcb-clean'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='flushbyasid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pause-filter'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='pfthreshold'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='svme-addr-chk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <feature policy='disable' name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <mode name='custom' supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Broadwell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cascadelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Cooperlake-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Denverton-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Dhyana-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Genoa-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='auto-ibrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Milan-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amd-psfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='no-nested-data-bp'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='null-sel-clr-base'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='stibp-always-on'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-Rome-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='EPYC-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='GraniteRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-128'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-256'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx10-512'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='prefetchiti'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Haswell-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-noTSX'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v6'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Icelake-Server-v7'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='IvyBridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='KnightsMill-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4fmaps'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-4vnniw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512er'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512pf'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G4-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Opteron_G5-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fma4'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tbm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xop'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SapphireRapids-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='amx-tile'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-bf16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-fp16'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512-vpopcntdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bitalg'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vbmi2'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrc'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fzrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='la57'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='taa-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='tsx-ldtrk'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xfd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='SierraForest-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ifma'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-ne-convert'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx-vnni-int8'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='bus-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cmpccxadd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fbsdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='fsrs'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ibrs-all'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mcdt-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pbrsb-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='psdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='sbdr-ssdp-no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='serialize'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vaes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='vpclmulqdq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Client-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='hle'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='rtm'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Skylake-Server-v5'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512bw'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512cd'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512dq'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512f'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='avx512vl'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='invpcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pcid'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='pku'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='mpx'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v2'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v3'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='core-capability'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='split-lock-detect'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='Snowridge-v4'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='cldemote'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='erms'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='gfni'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdir64b'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='movdiri'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='xsaves'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='athlon-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='core2duo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='coreduo-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='n270-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='ss'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <blockers model='phenom-v1'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnow'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <feature name='3dnowext'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </blockers>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </mode>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </cpu>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <memoryBacking supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <enum name='sourceType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>anonymous</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <value>memfd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </memoryBacking>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <disk supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='diskDevice'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>disk</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cdrom</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>floppy</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>lun</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ide</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>fdc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>sata</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <graphics supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vnc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egl-headless</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </graphics>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <video supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='modelType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vga</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>cirrus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>none</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>bochs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ramfb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </video>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hostdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='mode'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>subsystem</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='startupPolicy'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>mandatory</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>requisite</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>optional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='subsysType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pci</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>scsi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='capsType'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='pciBackend'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hostdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <rng supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtio-non-transitional</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>random</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>egd</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </rng>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <filesystem supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='driverType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>path</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>handle</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>virtiofs</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </filesystem>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <tpm supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-tis</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tpm-crb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emulator</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>external</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendVersion'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>2.0</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </tpm>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <redirdev supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='bus'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>usb</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </redirdev>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <channel supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </channel>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <crypto supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendModel'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>builtin</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </crypto>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <interface supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='backendType'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>default</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>passt</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </interface>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <panic supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='model'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>isa</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>hyperv</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </panic>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <console supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='type'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>null</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vc</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pty</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dev</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>file</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>pipe</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stdio</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>udp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tcp</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>unix</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>qemu-vdagent</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>dbus</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </console>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </devices>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  <features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <gic supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <vmcoreinfo supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <genid supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backingStoreInput supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <backup supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <async-teardown supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <ps2 supported='yes'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sev supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <sgx supported='no'/>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <hyperv supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='features'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>relaxed</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vapic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>spinlocks</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vpindex</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>runtime</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>synic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>stimer</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reset</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>vendor_id</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>frequencies</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>reenlightenment</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tlbflush</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>ipi</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>avic</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>emsr_bitmap</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>xmm_input</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <spinlocks>4095</spinlocks>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <stimer_direct>on</stimer_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_direct>on</tlbflush_direct>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <tlbflush_extended>on</tlbflush_extended>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </defaults>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </hyperv>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    <launchSecurity supported='yes'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      <enum name='sectype'>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:        <value>tdx</value>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:      </enum>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:    </launchSecurity>
Jan 10 12:15:49 np0005580781 nova_compute[237049]:  </features>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: </domainCapabilities>
Jan 10 12:15:49 np0005580781 nova_compute[237049]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.451 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.451 237053 INFO nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Secure Boot support detected#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.453 237053 INFO nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.454 237053 INFO nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.462 237053 DEBUG nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.518 237053 INFO nova.virt.node [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Determined node identity 5f85855c-8a9b-43b5-ae49-f5846b9dcebe from /var/lib/nova/compute_id#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.548 237053 WARNING nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Compute nodes ['5f85855c-8a9b-43b5-ae49-f5846b9dcebe'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.584 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.618 237053 WARNING nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 10 12:15:49 np0005580781 nova_compute[237049]: 2026-01-10 17:15:49.619 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 12:15:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:15:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799514320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.193 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 12:15:50 np0005580781 systemd[1]: Starting libvirt nodedev daemon...
Jan 10 12:15:50 np0005580781 systemd[1]: Started libvirt nodedev daemon.
Jan 10 12:15:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.543 237053 WARNING nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.546 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5259MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.546 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.547 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.566 237053 WARNING nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] No compute node record for compute-0.ctlplane.example.com:5f85855c-8a9b-43b5-ae49-f5846b9dcebe: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 5f85855c-8a9b-43b5-ae49-f5846b9dcebe could not be found.
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.585 237053 INFO nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.652 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 10 12:15:50 np0005580781 nova_compute[237049]: 2026-01-10 17:15:50.652 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 10 12:15:51 np0005580781 nova_compute[237049]: 2026-01-10 17:15:51.691 237053 INFO nova.scheduler.client.report [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [req-55c6c937-395c-49e6-b754-05c0e3db2256] Created resource provider record via placement API for resource provider with UUID 5f85855c-8a9b-43b5-ae49-f5846b9dcebe and name compute-0.ctlplane.example.com.
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.046 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 12:15:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:52 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:15:52 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294518686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.660 237053 DEBUG oslo_concurrency.processutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.667 237053 DEBUG nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 10 12:15:52 np0005580781 nova_compute[237049]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.667 237053 INFO nova.virt.libvirt.host [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] kernel doesn't support AMD SEV
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.668 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.668 237053 DEBUG nova.virt.libvirt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.748 237053 DEBUG nova.scheduler.client.report [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updated inventory for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.750 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.751 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.869 237053 DEBUG nova.compute.provider_tree [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Updating resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.895 237053 DEBUG nova.compute.resource_tracker [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.896 237053 DEBUG oslo_concurrency.lockutils [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.897 237053 DEBUG nova.service [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.989 237053 DEBUG nova.service [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 10 12:15:52 np0005580781 nova_compute[237049]: 2026-01-10 17:15:52.990 237053 DEBUG nova.servicegroup.drivers.db [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 10 12:15:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:15:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:57 np0005580781 podman[237414]: 2026-01-10 17:15:57.076108986 +0000 UTC m=+0.078478325 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 10 12:15:57 np0005580781 podman[237415]: 2026-01-10 17:15:57.128089573 +0000 UTC m=+0.116772981 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 10 12:15:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:15:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:04 np0005580781 podman[237603]: 2026-01-10 17:16:04.608330651 +0000 UTC m=+0.047166914 container create 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:16:04 np0005580781 systemd[1]: Started libpod-conmon-8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3.scope.
Jan 10 12:16:04 np0005580781 podman[237603]: 2026-01-10 17:16:04.586631287 +0000 UTC m=+0.025467550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:16:04 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:16:04 np0005580781 podman[237603]: 2026-01-10 17:16:04.709008734 +0000 UTC m=+0.147845047 container init 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:16:04 np0005580781 podman[237603]: 2026-01-10 17:16:04.721178063 +0000 UTC m=+0.160014306 container start 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 10 12:16:04 np0005580781 podman[237603]: 2026-01-10 17:16:04.725224655 +0000 UTC m=+0.164060898 container attach 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 12:16:04 np0005580781 flamboyant_hoover[237619]: 167 167
Jan 10 12:16:04 np0005580781 systemd[1]: libpod-8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3.scope: Deactivated successfully.
Jan 10 12:16:04 np0005580781 podman[237603]: 2026-01-10 17:16:04.732649502 +0000 UTC m=+0.171485755 container died 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 12:16:04 np0005580781 systemd[1]: var-lib-containers-storage-overlay-630e0f6efe8c12391f15fc5a7aa8a056b03c57cb7fc09c1a68b84da0fe2c4752-merged.mount: Deactivated successfully.
Jan 10 12:16:04 np0005580781 podman[237603]: 2026-01-10 17:16:04.776507403 +0000 UTC m=+0.215343636 container remove 8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 12:16:04 np0005580781 systemd[1]: libpod-conmon-8137eaabc56f35c4e06ed3ed012016152697e678325ac60f7182e17027d12dd3.scope: Deactivated successfully.
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:16:04 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:16:04 np0005580781 podman[237643]: 2026-01-10 17:16:04.983421223 +0000 UTC m=+0.062515361 container create 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 12:16:05 np0005580781 systemd[1]: Started libpod-conmon-35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0.scope.
Jan 10 12:16:05 np0005580781 podman[237643]: 2026-01-10 17:16:04.962210472 +0000 UTC m=+0.041304650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:16:05 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:16:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:05 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:05 np0005580781 podman[237643]: 2026-01-10 17:16:05.081694358 +0000 UTC m=+0.160788576 container init 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:16:05 np0005580781 podman[237643]: 2026-01-10 17:16:05.0943393 +0000 UTC m=+0.173433468 container start 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:16:05 np0005580781 podman[237643]: 2026-01-10 17:16:05.099922255 +0000 UTC m=+0.179016413 container attach 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 12:16:05 np0005580781 vigilant_curie[237659]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:16:05 np0005580781 vigilant_curie[237659]: --> All data devices are unavailable
Jan 10 12:16:05 np0005580781 systemd[1]: libpod-35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0.scope: Deactivated successfully.
Jan 10 12:16:05 np0005580781 podman[237643]: 2026-01-10 17:16:05.643542968 +0000 UTC m=+0.722637106 container died 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:16:05 np0005580781 systemd[1]: var-lib-containers-storage-overlay-45ee30863db5cc008df2fb5e393c7666ff74218f109524494a709b045cc6ed29-merged.mount: Deactivated successfully.
Jan 10 12:16:05 np0005580781 podman[237643]: 2026-01-10 17:16:05.711735636 +0000 UTC m=+0.790829794 container remove 35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:16:05 np0005580781 systemd[1]: libpod-conmon-35e194c4e956f4aa12a2c75473ce78f6db365cfab451fa2eb12b33fd98c374b0.scope: Deactivated successfully.
Jan 10 12:16:06 np0005580781 podman[237754]: 2026-01-10 17:16:06.253555949 +0000 UTC m=+0.060920637 container create de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:16:06 np0005580781 systemd[1]: Started libpod-conmon-de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba.scope.
Jan 10 12:16:06 np0005580781 podman[237754]: 2026-01-10 17:16:06.227905845 +0000 UTC m=+0.035270623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:16:06 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:16:06 np0005580781 podman[237754]: 2026-01-10 17:16:06.356006601 +0000 UTC m=+0.163371319 container init de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 12:16:06 np0005580781 podman[237754]: 2026-01-10 17:16:06.367311986 +0000 UTC m=+0.174676674 container start de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:16:06 np0005580781 podman[237754]: 2026-01-10 17:16:06.370859404 +0000 UTC m=+0.178224092 container attach de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:16:06 np0005580781 epic_cori[237770]: 167 167
Jan 10 12:16:06 np0005580781 systemd[1]: libpod-de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba.scope: Deactivated successfully.
Jan 10 12:16:06 np0005580781 podman[237754]: 2026-01-10 17:16:06.373026465 +0000 UTC m=+0.180391193 container died de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:16:06 np0005580781 systemd[1]: var-lib-containers-storage-overlay-847e363c224eebeb0a57b1e0aca80da1a789424b194de48e0876ef672bc516b5-merged.mount: Deactivated successfully.
Jan 10 12:16:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:06 np0005580781 podman[237754]: 2026-01-10 17:16:06.421948237 +0000 UTC m=+0.229312965 container remove de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cori, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:16:06 np0005580781 systemd[1]: libpod-conmon-de7531dc863af961f085c2ca1624cfde98b354dc6383f51f5f40e45cfb13e6ba.scope: Deactivated successfully.
Jan 10 12:16:06 np0005580781 podman[237794]: 2026-01-10 17:16:06.635580554 +0000 UTC m=+0.054996592 container create 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:16:06 np0005580781 systemd[1]: Started libpod-conmon-2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71.scope.
Jan 10 12:16:06 np0005580781 podman[237794]: 2026-01-10 17:16:06.610037332 +0000 UTC m=+0.029453430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:16:06 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:16:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:06 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:06 np0005580781 podman[237794]: 2026-01-10 17:16:06.738163649 +0000 UTC m=+0.157579677 container init 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 12:16:06 np0005580781 podman[237794]: 2026-01-10 17:16:06.749677 +0000 UTC m=+0.169093028 container start 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:16:06 np0005580781 podman[237794]: 2026-01-10 17:16:06.75580267 +0000 UTC m=+0.175218668 container attach 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]: {
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:    "0": [
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:        {
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "devices": [
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "/dev/loop3"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            ],
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_name": "ceph_lv0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_size": "21470642176",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "name": "ceph_lv0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "tags": {
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cluster_name": "ceph",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.crush_device_class": "",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.encrypted": "0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.objectstore": "bluestore",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osd_id": "0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.type": "block",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.vdo": "0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.with_tpm": "0"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            },
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "type": "block",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "vg_name": "ceph_vg0"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:        }
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:    ],
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:    "1": [
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:        {
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "devices": [
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "/dev/loop4"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            ],
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_name": "ceph_lv1",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_size": "21470642176",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "name": "ceph_lv1",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "tags": {
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cluster_name": "ceph",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.crush_device_class": "",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.encrypted": "0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.objectstore": "bluestore",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osd_id": "1",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.type": "block",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.vdo": "0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.with_tpm": "0"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            },
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "type": "block",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "vg_name": "ceph_vg1"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:        }
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:    ],
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:    "2": [
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:        {
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "devices": [
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "/dev/loop5"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            ],
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_name": "ceph_lv2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_size": "21470642176",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "name": "ceph_lv2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "tags": {
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.cluster_name": "ceph",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.crush_device_class": "",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.encrypted": "0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.objectstore": "bluestore",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osd_id": "2",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.type": "block",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.vdo": "0",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:                "ceph.with_tpm": "0"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            },
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "type": "block",
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:            "vg_name": "ceph_vg2"
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:        }
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]:    ]
Jan 10 12:16:07 np0005580781 exciting_visvesvaraya[237811]: }
Jan 10 12:16:07 np0005580781 systemd[1]: libpod-2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71.scope: Deactivated successfully.
Jan 10 12:16:07 np0005580781 podman[237794]: 2026-01-10 17:16:07.082863815 +0000 UTC m=+0.502279853 container died 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:16:07 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c1f67a9c500e5e93ff6e1d07a23cdf5e81744ce6bd273fdc2aa636698d27cb8b-merged.mount: Deactivated successfully.
Jan 10 12:16:07 np0005580781 podman[237794]: 2026-01-10 17:16:07.138767161 +0000 UTC m=+0.558183159 container remove 2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:16:07 np0005580781 systemd[1]: libpod-conmon-2478852f914982c17b0306501fba3394e42615a5009c35c8eb867a3938d1dd71.scope: Deactivated successfully.
Jan 10 12:16:07 np0005580781 podman[237896]: 2026-01-10 17:16:07.65063971 +0000 UTC m=+0.051692050 container create d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 12:16:07 np0005580781 systemd[1]: Started libpod-conmon-d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7.scope.
Jan 10 12:16:07 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:16:07 np0005580781 podman[237896]: 2026-01-10 17:16:07.621602112 +0000 UTC m=+0.022654512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:16:07 np0005580781 podman[237896]: 2026-01-10 17:16:07.733055404 +0000 UTC m=+0.134107744 container init d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:16:07 np0005580781 podman[237896]: 2026-01-10 17:16:07.744951745 +0000 UTC m=+0.146004095 container start d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 10 12:16:07 np0005580781 busy_mccarthy[237912]: 167 167
Jan 10 12:16:07 np0005580781 podman[237896]: 2026-01-10 17:16:07.749155872 +0000 UTC m=+0.150208222 container attach d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 12:16:07 np0005580781 systemd[1]: libpod-d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7.scope: Deactivated successfully.
Jan 10 12:16:07 np0005580781 podman[237917]: 2026-01-10 17:16:07.814490251 +0000 UTC m=+0.040274092 container died d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:16:07 np0005580781 systemd[1]: var-lib-containers-storage-overlay-78ea7d79bf41ee1807e4af9d79d2e45ce8b966ad38a9a2a5bbd26bbbc3eb4510-merged.mount: Deactivated successfully.
Jan 10 12:16:07 np0005580781 podman[237917]: 2026-01-10 17:16:07.864012319 +0000 UTC m=+0.089796110 container remove d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:16:07 np0005580781 systemd[1]: libpod-conmon-d4aebef4ba49a1071920034bf460d9bf04f94ac0ca915264122712df9d06f3d7.scope: Deactivated successfully.
Jan 10 12:16:08 np0005580781 podman[237939]: 2026-01-10 17:16:08.074066057 +0000 UTC m=+0.054532689 container create 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:16:08 np0005580781 systemd[1]: Started libpod-conmon-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope.
Jan 10 12:16:08 np0005580781 podman[237939]: 2026-01-10 17:16:08.049973896 +0000 UTC m=+0.030440578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:16:08 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:16:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:08 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:16:08 np0005580781 podman[237939]: 2026-01-10 17:16:08.169561765 +0000 UTC m=+0.150028417 container init 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 10 12:16:08 np0005580781 podman[237939]: 2026-01-10 17:16:08.183060471 +0000 UTC m=+0.163527133 container start 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:16:08 np0005580781 podman[237939]: 2026-01-10 17:16:08.187242027 +0000 UTC m=+0.167708719 container attach 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 12:16:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:08 np0005580781 lvm[238037]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:16:08 np0005580781 lvm[238037]: VG ceph_vg1 finished
Jan 10 12:16:08 np0005580781 lvm[238033]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:16:08 np0005580781 lvm[238033]: VG ceph_vg0 finished
Jan 10 12:16:08 np0005580781 lvm[238038]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:16:08 np0005580781 lvm[238038]: VG ceph_vg2 finished
Jan 10 12:16:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:16:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:16:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:16:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:16:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:16:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:16:09 np0005580781 nostalgic_faraday[237955]: {}
Jan 10 12:16:09 np0005580781 systemd[1]: libpod-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope: Deactivated successfully.
Jan 10 12:16:09 np0005580781 systemd[1]: libpod-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope: Consumed 1.403s CPU time.
Jan 10 12:16:09 np0005580781 podman[237939]: 2026-01-10 17:16:09.056419213 +0000 UTC m=+1.036885845 container died 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 12:16:09 np0005580781 systemd[1]: var-lib-containers-storage-overlay-63955a5c0b425f422290a8b352a018f6052f18511bcea0a3eac1536b452ce5cf-merged.mount: Deactivated successfully.
Jan 10 12:16:09 np0005580781 podman[237939]: 2026-01-10 17:16:09.108349708 +0000 UTC m=+1.088816340 container remove 5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_faraday, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:16:09 np0005580781 systemd[1]: libpod-conmon-5708b6f7c4d6d6724210c016025d0cdcd9c1995ad1d48b37ed25ee57d83e4672.scope: Deactivated successfully.
Jan 10 12:16:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:16:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:16:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:16:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:16:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:09 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:16:09 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:16:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688670713' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688670713' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3710747635' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3710747635' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945315605' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:16:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945315605' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:16:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:12 np0005580781 nova_compute[237049]: 2026-01-10 17:16:12.993 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:13 np0005580781 nova_compute[237049]: 2026-01-10 17:16:13.018 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:28 np0005580781 podman[238080]: 2026-01-10 17:16:28.065777387 +0000 UTC m=+0.061468972 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:16:28 np0005580781 podman[238081]: 2026-01-10 17:16:28.117547898 +0000 UTC m=+0.112299727 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 10 12:16:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:16:38
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'images']
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:16:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:16:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:16:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:16:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:16:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:16:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.348 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.360 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.360 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.361 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.361 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.362 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.362 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.362 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.363 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.363 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.388 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.389 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.390 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.390 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:16:48 np0005580781 nova_compute[237049]: 2026-01-10 17:16:48.391 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:16:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:16:48.918 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:16:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:16:48.918 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:16:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:16:48.919 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:16:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:16:48 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750219385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.003 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.206 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.208 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5297MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.209 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.209 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:16:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.455 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.456 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:16:49 np0005580781 nova_compute[237049]: 2026-01-10 17:16:49.478 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:16:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:16:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022571183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:16:50 np0005580781 nova_compute[237049]: 2026-01-10 17:16:50.014 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:16:50 np0005580781 nova_compute[237049]: 2026-01-10 17:16:50.021 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:16:50 np0005580781 nova_compute[237049]: 2026-01-10 17:16:50.058 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:16:50 np0005580781 nova_compute[237049]: 2026-01-10 17:16:50.082 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:16:50 np0005580781 nova_compute[237049]: 2026-01-10 17:16:50.082 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:16:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:16:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 10 12:16:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/272340665' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 10 12:16:54 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 10 12:16:54 np0005580781 ceph-mgr[75538]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 12:16:54 np0005580781 ceph-mgr[75538]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 12:16:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:16:59 np0005580781 podman[238166]: 2026-01-10 17:16:59.044571077 +0000 UTC m=+0.051886607 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 10 12:16:59 np0005580781 podman[238167]: 2026-01-10 17:16:59.083655856 +0000 UTC m=+0.090979886 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 12:16:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:17:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:17:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:17:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:17:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:17:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:17:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:17:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:17:10 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:17:10 np0005580781 podman[238351]: 2026-01-10 17:17:10.705214834 +0000 UTC m=+0.048055198 container create 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:17:10 np0005580781 systemd[1]: Started libpod-conmon-209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06.scope.
Jan 10 12:17:10 np0005580781 podman[238351]: 2026-01-10 17:17:10.683934401 +0000 UTC m=+0.026774795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:17:10 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:17:10 np0005580781 podman[238351]: 2026-01-10 17:17:10.818761696 +0000 UTC m=+0.161602150 container init 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:17:10 np0005580781 podman[238351]: 2026-01-10 17:17:10.832324912 +0000 UTC m=+0.175165306 container start 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 12:17:10 np0005580781 podman[238351]: 2026-01-10 17:17:10.836812277 +0000 UTC m=+0.179652681 container attach 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 10 12:17:10 np0005580781 upbeat_turing[238367]: 167 167
Jan 10 12:17:10 np0005580781 systemd[1]: libpod-209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06.scope: Deactivated successfully.
Jan 10 12:17:10 np0005580781 podman[238372]: 2026-01-10 17:17:10.904898706 +0000 UTC m=+0.044127979 container died 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 12:17:10 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6989a628d69c015454a4c162057b998022ddbda44ad4678de23724c5e83f18a4-merged.mount: Deactivated successfully.
Jan 10 12:17:10 np0005580781 podman[238372]: 2026-01-10 17:17:10.964586572 +0000 UTC m=+0.103815814 container remove 209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 10 12:17:10 np0005580781 systemd[1]: libpod-conmon-209cfdccd136a24e22ba8e6e46733ae71bcca6a4495f17d43417981e44698a06.scope: Deactivated successfully.
Jan 10 12:17:11 np0005580781 podman[238394]: 2026-01-10 17:17:11.208927625 +0000 UTC m=+0.047751661 container create ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 12:17:11 np0005580781 systemd[1]: Started libpod-conmon-ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9.scope.
Jan 10 12:17:11 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:17:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:11 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:11 np0005580781 podman[238394]: 2026-01-10 17:17:11.188464842 +0000 UTC m=+0.027288898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:17:11 np0005580781 podman[238394]: 2026-01-10 17:17:11.297215651 +0000 UTC m=+0.136039727 container init ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 10 12:17:11 np0005580781 podman[238394]: 2026-01-10 17:17:11.312342147 +0000 UTC m=+0.151166193 container start ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:17:11 np0005580781 podman[238394]: 2026-01-10 17:17:11.316501963 +0000 UTC m=+0.155326009 container attach ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:17:11 np0005580781 amazing_ellis[238410]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:17:11 np0005580781 amazing_ellis[238410]: --> All data devices are unavailable
Jan 10 12:17:11 np0005580781 systemd[1]: libpod-ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9.scope: Deactivated successfully.
Jan 10 12:17:11 np0005580781 podman[238394]: 2026-01-10 17:17:11.913046716 +0000 UTC m=+0.751870782 container died ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:17:11 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2a4bf1f920ea07e71acdfe1351337843efa0ce201d117b5ddf4dd3bd33bf5581-merged.mount: Deactivated successfully.
Jan 10 12:17:11 np0005580781 podman[238394]: 2026-01-10 17:17:11.966391089 +0000 UTC m=+0.805215125 container remove ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ellis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:17:11 np0005580781 systemd[1]: libpod-conmon-ee9ba58d06f1a849a9d2a1cedd4ddc92a318e127a252b388ba0e193f979b4db9.scope: Deactivated successfully.
Jan 10 12:17:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:12 np0005580781 podman[238504]: 2026-01-10 17:17:12.468037136 +0000 UTC m=+0.049633239 container create e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 10 12:17:12 np0005580781 systemd[1]: Started libpod-conmon-e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41.scope.
Jan 10 12:17:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:17:12 np0005580781 podman[238504]: 2026-01-10 17:17:12.446216369 +0000 UTC m=+0.027812472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:17:12 np0005580781 podman[238504]: 2026-01-10 17:17:12.542390486 +0000 UTC m=+0.123986609 container init e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 10 12:17:12 np0005580781 podman[238504]: 2026-01-10 17:17:12.550864142 +0000 UTC m=+0.132460205 container start e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 12:17:12 np0005580781 busy_murdock[238520]: 167 167
Jan 10 12:17:12 np0005580781 systemd[1]: libpod-e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41.scope: Deactivated successfully.
Jan 10 12:17:12 np0005580781 podman[238504]: 2026-01-10 17:17:12.554577767 +0000 UTC m=+0.136173850 container attach e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 12:17:12 np0005580781 podman[238504]: 2026-01-10 17:17:12.5566484 +0000 UTC m=+0.138244463 container died e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 12:17:12 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1bbe8e70c5e4bc7a6b3405a16233b39006cdfe2084f59ca9a98070bf0eb04b87-merged.mount: Deactivated successfully.
Jan 10 12:17:12 np0005580781 podman[238504]: 2026-01-10 17:17:12.601564878 +0000 UTC m=+0.183160981 container remove e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_murdock, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 10 12:17:12 np0005580781 systemd[1]: libpod-conmon-e8aa62979a66c05898b00a4278488e851eec52d56cd31d04f126f0727e907a41.scope: Deactivated successfully.
Jan 10 12:17:12 np0005580781 podman[238543]: 2026-01-10 17:17:12.868196771 +0000 UTC m=+0.076915947 container create 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:17:12 np0005580781 systemd[1]: Started libpod-conmon-6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6.scope.
Jan 10 12:17:12 np0005580781 podman[238543]: 2026-01-10 17:17:12.836683105 +0000 UTC m=+0.045402361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:17:12 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:17:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:12 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:12 np0005580781 podman[238543]: 2026-01-10 17:17:12.96915359 +0000 UTC m=+0.177872776 container init 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:17:12 np0005580781 podman[238543]: 2026-01-10 17:17:12.978855508 +0000 UTC m=+0.187574674 container start 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:17:12 np0005580781 podman[238543]: 2026-01-10 17:17:12.981773003 +0000 UTC m=+0.190492249 container attach 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]: {
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:    "0": [
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:        {
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "devices": [
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "/dev/loop3"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            ],
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_name": "ceph_lv0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_size": "21470642176",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "name": "ceph_lv0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "tags": {
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cluster_name": "ceph",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.crush_device_class": "",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.encrypted": "0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.objectstore": "bluestore",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osd_id": "0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.type": "block",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.vdo": "0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.with_tpm": "0"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            },
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "type": "block",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "vg_name": "ceph_vg0"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:        }
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:    ],
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:    "1": [
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:        {
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "devices": [
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "/dev/loop4"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            ],
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_name": "ceph_lv1",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_size": "21470642176",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "name": "ceph_lv1",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "tags": {
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cluster_name": "ceph",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.crush_device_class": "",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.encrypted": "0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.objectstore": "bluestore",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osd_id": "1",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.type": "block",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.vdo": "0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.with_tpm": "0"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            },
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "type": "block",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "vg_name": "ceph_vg1"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:        }
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:    ],
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:    "2": [
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:        {
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "devices": [
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "/dev/loop5"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            ],
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_name": "ceph_lv2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_size": "21470642176",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "name": "ceph_lv2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "tags": {
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.cluster_name": "ceph",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.crush_device_class": "",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.encrypted": "0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.objectstore": "bluestore",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osd_id": "2",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.type": "block",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.vdo": "0",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:                "ceph.with_tpm": "0"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            },
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "type": "block",
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:            "vg_name": "ceph_vg2"
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:        }
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]:    ]
Jan 10 12:17:13 np0005580781 strange_driscoll[238560]: }
Jan 10 12:17:13 np0005580781 systemd[1]: libpod-6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6.scope: Deactivated successfully.
Jan 10 12:17:13 np0005580781 podman[238543]: 2026-01-10 17:17:13.347807184 +0000 UTC m=+0.556526350 container died 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 12:17:13 np0005580781 systemd[1]: var-lib-containers-storage-overlay-5ade57e8429191a6bb9615ed64dbc51c1b1d9d968fdad1dfb98f0cfa364f7f9a-merged.mount: Deactivated successfully.
Jan 10 12:17:13 np0005580781 podman[238543]: 2026-01-10 17:17:13.389477909 +0000 UTC m=+0.598197115 container remove 6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 10 12:17:13 np0005580781 systemd[1]: libpod-conmon-6c17544efddcd5d37c7f8fbf11d11e2e396f391ef6b4b8d6849bd274abf9cab6.scope: Deactivated successfully.
Jan 10 12:17:13 np0005580781 podman[238643]: 2026-01-10 17:17:13.947430255 +0000 UTC m=+0.055855128 container create e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 12:17:13 np0005580781 systemd[1]: Started libpod-conmon-e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30.scope.
Jan 10 12:17:14 np0005580781 podman[238643]: 2026-01-10 17:17:13.921601265 +0000 UTC m=+0.030026228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:17:14 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:17:14 np0005580781 podman[238643]: 2026-01-10 17:17:14.031564795 +0000 UTC m=+0.139989688 container init e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:17:14 np0005580781 podman[238643]: 2026-01-10 17:17:14.043223033 +0000 UTC m=+0.151647946 container start e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 12:17:14 np0005580781 podman[238643]: 2026-01-10 17:17:14.047213735 +0000 UTC m=+0.155638618 container attach e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 12:17:14 np0005580781 jolly_blackburn[238659]: 167 167
Jan 10 12:17:14 np0005580781 systemd[1]: libpod-e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30.scope: Deactivated successfully.
Jan 10 12:17:14 np0005580781 podman[238643]: 2026-01-10 17:17:14.049313578 +0000 UTC m=+0.157738461 container died e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 12:17:14 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a6ce51383189aadbd701a09827537c3ee1766afc76683e5d3ef5ceb412b36e99-merged.mount: Deactivated successfully.
Jan 10 12:17:14 np0005580781 podman[238643]: 2026-01-10 17:17:14.087545555 +0000 UTC m=+0.195970438 container remove e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:17:14 np0005580781 systemd[1]: libpod-conmon-e0fb3d17b202e74052f42b9f2c030d17628f44d1d2fac89f826400a8bb970c30.scope: Deactivated successfully.
Jan 10 12:17:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:14 np0005580781 podman[238682]: 2026-01-10 17:17:14.334997088 +0000 UTC m=+0.073000466 container create ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:17:14 np0005580781 systemd[1]: Started libpod-conmon-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope.
Jan 10 12:17:14 np0005580781 podman[238682]: 2026-01-10 17:17:14.307984268 +0000 UTC m=+0.045987706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:17:14 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:17:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:14 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:17:14 np0005580781 podman[238682]: 2026-01-10 17:17:14.439547819 +0000 UTC m=+0.177551197 container init ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:17:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:14 np0005580781 podman[238682]: 2026-01-10 17:17:14.452051949 +0000 UTC m=+0.190055337 container start ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:17:14 np0005580781 podman[238682]: 2026-01-10 17:17:14.457152319 +0000 UTC m=+0.195155727 container attach ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:17:15 np0005580781 lvm[238776]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:17:15 np0005580781 lvm[238775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:17:15 np0005580781 lvm[238776]: VG ceph_vg1 finished
Jan 10 12:17:15 np0005580781 lvm[238775]: VG ceph_vg0 finished
Jan 10 12:17:15 np0005580781 lvm[238778]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:17:15 np0005580781 lvm[238778]: VG ceph_vg2 finished
Jan 10 12:17:15 np0005580781 xenodochial_ride[238697]: {}
Jan 10 12:17:15 np0005580781 systemd[1]: libpod-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope: Deactivated successfully.
Jan 10 12:17:15 np0005580781 podman[238682]: 2026-01-10 17:17:15.307229689 +0000 UTC m=+1.045233037 container died ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:17:15 np0005580781 systemd[1]: libpod-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope: Consumed 1.378s CPU time.
Jan 10 12:17:15 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6a11b2ceb313a06920e7e3904698f3d2add111e6ad713adfa5739b8840a55526-merged.mount: Deactivated successfully.
Jan 10 12:17:15 np0005580781 podman[238682]: 2026-01-10 17:17:15.348155105 +0000 UTC m=+1.086158463 container remove ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:17:15 np0005580781 systemd[1]: libpod-conmon-ad2959dc96e97275099415fc4dd41ca14e26cf3eb47220370a4a65acf018af0b.scope: Deactivated successfully.
Jan 10 12:17:15 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:17:15 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:17:15 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:17:15 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:17:15 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:17:15 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:17:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 10 12:17:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2471123660' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 10 12:17:16 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14318 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 10 12:17:16 np0005580781 ceph-mgr[75538]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 12:17:16 np0005580781 ceph-mgr[75538]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 10 12:17:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:17:18 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3105 writes, 13K keys, 3105 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s#012Cumulative WAL: 3105 writes, 3105 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1288 writes, 5596 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 5.75 MB, 0.01 MB/s#012Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     99.5      0.10              0.05         6    0.017       0      0       0.0       0.0#012  L6      1/0    4.74 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    109.1     89.2      0.28              0.16         5    0.056     16K   2269       0.0       0.0#012 Sum      1/0    4.74 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     79.8     92.0      0.39              0.21        11    0.035     16K   2269       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     84.2     85.8      0.23              0.13         6    0.038     10K   1495       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    109.1     89.2      0.28              0.16         5    0.056     16K   2269       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    103.1      0.10              0.05         5    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.8      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.010, interval 0.004#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.03 MB/s read, 0.4 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55efa2bef8d0#2 capacity: 308.00 MB usage: 1.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000226 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(99,1.29 MB,0.419983%) FilterBlock(12,55.17 KB,0.0174931%) IndexBlock(12,109.77 KB,0.0348029%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 10 12:17:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:30 np0005580781 podman[238817]: 2026-01-10 17:17:30.100057995 +0000 UTC m=+0.087510677 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 12:17:30 np0005580781 podman[238818]: 2026-01-10 17:17:30.152115685 +0000 UTC m=+0.137551376 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 10 12:17:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:17:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/799742170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:17:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:17:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/799742170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:17:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:17:38
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'images', '.mgr', 'backups']
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:17:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:17:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:17:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:17:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:17:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:17:48.918 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:17:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:17:48.919 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:17:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:17:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:17:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.069 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.069 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.099 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.099 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.100 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.117 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.119 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.119 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.119 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.160 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.161 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.161 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.162 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.163 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:17:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:50 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:17:50 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986029489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.764 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.978 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.980 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5298MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.981 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:17:50 np0005580781 nova_compute[237049]: 2026-01-10 17:17:50.981 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.072 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.073 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.107 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:17:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:17:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766907015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.690 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.697 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.715 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.718 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.718 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.945 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.946 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.946 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.947 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:17:51 np0005580781 nova_compute[237049]: 2026-01-10 17:17:51.947 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:17:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:17:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:17:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:01 np0005580781 podman[238908]: 2026-01-10 17:18:01.05449838 +0000 UTC m=+0.058202029 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 12:18:01 np0005580781 podman[238909]: 2026-01-10 17:18:01.106736174 +0000 UTC m=+0.109299502 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true)
Jan 10 12:18:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:18:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:18:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:18:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:18:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:18:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:18:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:10 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:18:10.339 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:b5:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:56:cf:00:80:b3'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 10 12:18:10 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:18:10.341 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 10 12:18:10 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:18:10.346 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 10 12:18:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:18:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:18:17 np0005580781 podman[239166]: 2026-01-10 17:18:17.559941629 +0000 UTC m=+0.068062747 container create 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:18:17 np0005580781 systemd[1]: Started libpod-conmon-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope.
Jan 10 12:18:17 np0005580781 podman[239166]: 2026-01-10 17:18:17.530152708 +0000 UTC m=+0.038273876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:18:17 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:18:17 np0005580781 podman[239166]: 2026-01-10 17:18:17.685943004 +0000 UTC m=+0.194064192 container init 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:18:17 np0005580781 podman[239166]: 2026-01-10 17:18:17.699267871 +0000 UTC m=+0.207389009 container start 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 10 12:18:17 np0005580781 podman[239166]: 2026-01-10 17:18:17.703433786 +0000 UTC m=+0.211554924 container attach 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:18:17 np0005580781 amazing_dewdney[239183]: 167 167
Jan 10 12:18:17 np0005580781 systemd[1]: libpod-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope: Deactivated successfully.
Jan 10 12:18:17 np0005580781 conmon[239183]: conmon 1d829c1195896ab16fa6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope/container/memory.events
Jan 10 12:18:17 np0005580781 podman[239166]: 2026-01-10 17:18:17.71082237 +0000 UTC m=+0.218943478 container died 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:18:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay-8fd1eec63f5cfd852851afdaf2a040cfb73b161bb9e74c04aa7d45e324949761-merged.mount: Deactivated successfully.
Jan 10 12:18:17 np0005580781 podman[239166]: 2026-01-10 17:18:17.766241788 +0000 UTC m=+0.274362906 container remove 1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:18:17 np0005580781 systemd[1]: libpod-conmon-1d829c1195896ab16fa63ad31c069c5764a02a2ca8569eba442da8495fc98f89.scope: Deactivated successfully.
Jan 10 12:18:18 np0005580781 podman[239206]: 2026-01-10 17:18:18.042406192 +0000 UTC m=+0.087179874 container create 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:18:18 np0005580781 podman[239206]: 2026-01-10 17:18:18.009569907 +0000 UTC m=+0.054343629 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:18:18 np0005580781 systemd[1]: Started libpod-conmon-91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20.scope.
Jan 10 12:18:18 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:18:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:18 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:18 np0005580781 podman[239206]: 2026-01-10 17:18:18.21277837 +0000 UTC m=+0.257552092 container init 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:18:18 np0005580781 podman[239206]: 2026-01-10 17:18:18.233668816 +0000 UTC m=+0.278442498 container start 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:18:18 np0005580781 podman[239206]: 2026-01-10 17:18:18.239116426 +0000 UTC m=+0.283890108 container attach 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:18:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:18 np0005580781 practical_noether[239223]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:18:18 np0005580781 practical_noether[239223]: --> All data devices are unavailable
Jan 10 12:18:18 np0005580781 systemd[1]: libpod-91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20.scope: Deactivated successfully.
Jan 10 12:18:18 np0005580781 podman[239206]: 2026-01-10 17:18:18.882292611 +0000 UTC m=+0.927066283 container died 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 12:18:18 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b7ff011782c128e9da4ab5569ef70c6c5e46534ba70abe4aabe66f27c19d76bd-merged.mount: Deactivated successfully.
Jan 10 12:18:18 np0005580781 podman[239206]: 2026-01-10 17:18:18.955324075 +0000 UTC m=+1.000097727 container remove 91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_noether, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 12:18:18 np0005580781 systemd[1]: libpod-conmon-91d62dd1b0d9e93e9a455b6824f2f45ef41abb943432419bff024b11d6874b20.scope: Deactivated successfully.
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.340536) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499340755, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2171, "num_deletes": 505, "total_data_size": 2157472, "memory_usage": 2201280, "flush_reason": "Manual Compaction"}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499357074, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2097174, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12153, "largest_seqno": 14323, "table_properties": {"data_size": 2087901, "index_size": 5323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 21250, "raw_average_key_size": 18, "raw_value_size": 2067248, "raw_average_value_size": 1818, "num_data_blocks": 245, "num_entries": 1137, "num_filter_entries": 1137, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065289, "oldest_key_time": 1768065289, "file_creation_time": 1768065499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16567 microseconds, and 6268 cpu microseconds.
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.357169) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2097174 bytes OK
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.357224) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.359154) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.359174) EVENT_LOG_v1 {"time_micros": 1768065499359171, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.359199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2147338, prev total WAL file size 2147338, number of live WAL files 2.
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.360334) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2048KB)], [32(4850KB)]
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499360552, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 7064059, "oldest_snapshot_seqno": -1}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3376 keys, 5626957 bytes, temperature: kUnknown
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499420431, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 5626957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5601750, "index_size": 15690, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 79870, "raw_average_key_size": 23, "raw_value_size": 5538453, "raw_average_value_size": 1640, "num_data_blocks": 678, "num_entries": 3376, "num_filter_entries": 3376, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.420921) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 5626957 bytes
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.423345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.6 rd, 93.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 4.7 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(6.1) write-amplify(2.7) OK, records in: 4399, records dropped: 1023 output_compression: NoCompression
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.423398) EVENT_LOG_v1 {"time_micros": 1768065499423362, "job": 14, "event": "compaction_finished", "compaction_time_micros": 60061, "compaction_time_cpu_micros": 32009, "output_level": 6, "num_output_files": 1, "total_output_size": 5626957, "num_input_records": 4399, "num_output_records": 3376, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499424237, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065499425824, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.360086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.425992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:18:19 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:18:19.426013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:18:19 np0005580781 podman[239318]: 2026-01-10 17:18:19.551457622 +0000 UTC m=+0.065118177 container create 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:18:19 np0005580781 systemd[1]: Started libpod-conmon-2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51.scope.
Jan 10 12:18:19 np0005580781 podman[239318]: 2026-01-10 17:18:19.524734095 +0000 UTC m=+0.038394740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:18:19 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:18:19 np0005580781 podman[239318]: 2026-01-10 17:18:19.655923212 +0000 UTC m=+0.169583797 container init 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 10 12:18:19 np0005580781 podman[239318]: 2026-01-10 17:18:19.666499594 +0000 UTC m=+0.180160149 container start 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 12:18:19 np0005580781 podman[239318]: 2026-01-10 17:18:19.670755711 +0000 UTC m=+0.184416306 container attach 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 12:18:19 np0005580781 bold_lamarr[239335]: 167 167
Jan 10 12:18:19 np0005580781 systemd[1]: libpod-2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51.scope: Deactivated successfully.
Jan 10 12:18:19 np0005580781 podman[239340]: 2026-01-10 17:18:19.745250485 +0000 UTC m=+0.049970049 container died 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:18:19 np0005580781 systemd[1]: var-lib-containers-storage-overlay-799e33ae352f9998698c8af4e76b0639d3d9c0b9734684fbeb5dd2991244a9f7-merged.mount: Deactivated successfully.
Jan 10 12:18:19 np0005580781 podman[239340]: 2026-01-10 17:18:19.786651417 +0000 UTC m=+0.091370891 container remove 2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 12:18:19 np0005580781 systemd[1]: libpod-conmon-2ec0318f10b20410d964ecdd04c340b17d59ad9c8237f63a11ce7137e663ba51.scope: Deactivated successfully.
Jan 10 12:18:20 np0005580781 podman[239362]: 2026-01-10 17:18:20.056382334 +0000 UTC m=+0.075610535 container create 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 10 12:18:20 np0005580781 systemd[1]: Started libpod-conmon-0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f.scope.
Jan 10 12:18:20 np0005580781 podman[239362]: 2026-01-10 17:18:20.028443154 +0000 UTC m=+0.047671455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:18:20 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:18:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:20 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:20 np0005580781 podman[239362]: 2026-01-10 17:18:20.16465016 +0000 UTC m=+0.183878461 container init 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:18:20 np0005580781 podman[239362]: 2026-01-10 17:18:20.175152359 +0000 UTC m=+0.194380600 container start 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 12:18:20 np0005580781 podman[239362]: 2026-01-10 17:18:20.179105288 +0000 UTC m=+0.198333519 container attach 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:18:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:20 np0005580781 busy_shirley[239379]: {
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:    "0": [
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:        {
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "devices": [
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "/dev/loop3"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            ],
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_name": "ceph_lv0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_size": "21470642176",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "name": "ceph_lv0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "tags": {
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cluster_name": "ceph",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.crush_device_class": "",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.encrypted": "0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.objectstore": "bluestore",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osd_id": "0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.type": "block",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.vdo": "0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.with_tpm": "0"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            },
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "type": "block",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "vg_name": "ceph_vg0"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:        }
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:    ],
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:    "1": [
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:        {
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "devices": [
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "/dev/loop4"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            ],
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_name": "ceph_lv1",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_size": "21470642176",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "name": "ceph_lv1",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "tags": {
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cluster_name": "ceph",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.crush_device_class": "",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.encrypted": "0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.objectstore": "bluestore",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osd_id": "1",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.type": "block",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.vdo": "0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.with_tpm": "0"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            },
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "type": "block",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "vg_name": "ceph_vg1"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:        }
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:    ],
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:    "2": [
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:        {
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "devices": [
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "/dev/loop5"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            ],
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_name": "ceph_lv2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_size": "21470642176",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "name": "ceph_lv2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "tags": {
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.cluster_name": "ceph",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.crush_device_class": "",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.encrypted": "0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.objectstore": "bluestore",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osd_id": "2",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.type": "block",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.vdo": "0",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:                "ceph.with_tpm": "0"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            },
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "type": "block",
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:            "vg_name": "ceph_vg2"
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:        }
Jan 10 12:18:20 np0005580781 busy_shirley[239379]:    ]
Jan 10 12:18:20 np0005580781 busy_shirley[239379]: }
Jan 10 12:18:20 np0005580781 systemd[1]: libpod-0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f.scope: Deactivated successfully.
Jan 10 12:18:20 np0005580781 podman[239388]: 2026-01-10 17:18:20.590195033 +0000 UTC m=+0.029725370 container died 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 12:18:20 np0005580781 systemd[1]: var-lib-containers-storage-overlay-cba08d2caad62faea6e45729da53903965255a00c49d5df15a62a9dc584d85c9-merged.mount: Deactivated successfully.
Jan 10 12:18:20 np0005580781 podman[239388]: 2026-01-10 17:18:20.634179416 +0000 UTC m=+0.073709753 container remove 0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:18:20 np0005580781 systemd[1]: libpod-conmon-0e14d3b36f89cf50c551539fbb04d8e10cb86df0b4f6947575495f0b0170717f.scope: Deactivated successfully.
Jan 10 12:18:21 np0005580781 podman[239464]: 2026-01-10 17:18:21.241997925 +0000 UTC m=+0.072951163 container create 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 12:18:21 np0005580781 systemd[1]: Started libpod-conmon-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope.
Jan 10 12:18:21 np0005580781 podman[239464]: 2026-01-10 17:18:21.212180022 +0000 UTC m=+0.043133240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:18:21 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:18:21 np0005580781 podman[239464]: 2026-01-10 17:18:21.327513542 +0000 UTC m=+0.158466820 container init 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 12:18:21 np0005580781 podman[239464]: 2026-01-10 17:18:21.337515788 +0000 UTC m=+0.168468966 container start 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 12:18:21 np0005580781 podman[239464]: 2026-01-10 17:18:21.341990642 +0000 UTC m=+0.172943790 container attach 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:18:21 np0005580781 systemd[1]: libpod-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope: Deactivated successfully.
Jan 10 12:18:21 np0005580781 romantic_taussig[239480]: 167 167
Jan 10 12:18:21 np0005580781 conmon[239480]: conmon 5a397329ba6c216482f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope/container/memory.events
Jan 10 12:18:21 np0005580781 podman[239464]: 2026-01-10 17:18:21.344291585 +0000 UTC m=+0.175244743 container died 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:18:21 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4bd51b188ba02604762d25eb661fad88ae5e887422816707c79e157fed427014-merged.mount: Deactivated successfully.
Jan 10 12:18:21 np0005580781 podman[239464]: 2026-01-10 17:18:21.389101541 +0000 UTC m=+0.220054709 container remove 5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 12:18:21 np0005580781 systemd[1]: libpod-conmon-5a397329ba6c216482f0d3a513a53b0938a33b74e92289a5c5082811cbe9b6df.scope: Deactivated successfully.
Jan 10 12:18:21 np0005580781 podman[239505]: 2026-01-10 17:18:21.59251827 +0000 UTC m=+0.069710784 container create 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 10 12:18:21 np0005580781 systemd[1]: Started libpod-conmon-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope.
Jan 10 12:18:21 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:18:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:18:21 np0005580781 podman[239505]: 2026-01-10 17:18:21.565508245 +0000 UTC m=+0.042700859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:18:21 np0005580781 podman[239505]: 2026-01-10 17:18:21.671185999 +0000 UTC m=+0.148378613 container init 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 12:18:21 np0005580781 podman[239505]: 2026-01-10 17:18:21.67993118 +0000 UTC m=+0.157123734 container start 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:18:21 np0005580781 podman[239505]: 2026-01-10 17:18:21.684791874 +0000 UTC m=+0.161984418 container attach 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:18:22 np0005580781 lvm[239600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:18:22 np0005580781 lvm[239599]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:18:22 np0005580781 lvm[239600]: VG ceph_vg1 finished
Jan 10 12:18:22 np0005580781 lvm[239599]: VG ceph_vg0 finished
Jan 10 12:18:22 np0005580781 lvm[239601]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:18:22 np0005580781 lvm[239601]: VG ceph_vg2 finished
Jan 10 12:18:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:22 np0005580781 happy_cori[239520]: {}
Jan 10 12:18:22 np0005580781 systemd[1]: libpod-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope: Deactivated successfully.
Jan 10 12:18:22 np0005580781 podman[239505]: 2026-01-10 17:18:22.570152266 +0000 UTC m=+1.047344780 container died 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:18:22 np0005580781 systemd[1]: libpod-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope: Consumed 1.399s CPU time.
Jan 10 12:18:22 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a2e466b67a0d647ec4a92636bb17b3395c6660819a84b98339119bfad232b643-merged.mount: Deactivated successfully.
Jan 10 12:18:22 np0005580781 podman[239505]: 2026-01-10 17:18:22.746717034 +0000 UTC m=+1.223909558 container remove 173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:18:22 np0005580781 systemd[1]: libpod-conmon-173ab1378d17f2f33bc074ade0e459888e5587a38a8cf351fe305d276c1df1f1.scope: Deactivated successfully.
Jan 10 12:18:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:18:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:18:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:18:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:18:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4379 writes, 20K keys, 4379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4379 writes, 468 syncs, 9.36 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560f2dc198d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000197 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Jan 10 12:18:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:32 np0005580781 podman[239643]: 2026-01-10 17:18:32.124919347 +0000 UTC m=+0.114349584 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 10 12:18:32 np0005580781 podman[239644]: 2026-01-10 17:18:32.167225603 +0000 UTC m=+0.154804599 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:18:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:18:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, i
Jan 10 12:18:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:18:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046424119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:18:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:18:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046424119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:18:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:18:38
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['vms', 'images', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:18:38 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:18:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:18:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:18:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown
Jan 10 12:18:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.302004027771843e-07 of space, bias 4.0, pg target 0.0011162404833326212 quantized to 16 (current 16)
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:18:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:18:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:48 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 12:18:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:18:48.920 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:18:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:18:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:18:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:18:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:18:49 np0005580781 nova_compute[237049]: 2026-01-10 17:18:49.336 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:49 np0005580781 nova_compute[237049]: 2026-01-10 17:18:49.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:49 np0005580781 nova_compute[237049]: 2026-01-10 17:18:49.345 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:18:49 np0005580781 nova_compute[237049]: 2026-01-10 17:18:49.345 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:18:49 np0005580781 nova_compute[237049]: 2026-01-10 17:18:49.451 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:18:50 np0005580781 nova_compute[237049]: 2026-01-10 17:18:50.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:50 np0005580781 nova_compute[237049]: 2026-01-10 17:18:50.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:50 np0005580781 nova_compute[237049]: 2026-01-10 17:18:50.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:50 np0005580781 nova_compute[237049]: 2026-01-10 17:18:50.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:50 np0005580781 nova_compute[237049]: 2026-01-10 17:18:50.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:18:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.382 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.383 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.383 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.384 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.384 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:18:51 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:18:51 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506908907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:18:51 np0005580781 nova_compute[237049]: 2026-01-10 17:18:51.996 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.230 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.232 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5286MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.232 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.233 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.310 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.310 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.329 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:18:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:52 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:18:52 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637204984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.904 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.910 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.931 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.934 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:18:52 np0005580781 nova_compute[237049]: 2026-01-10 17:18:52.935 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:18:53 np0005580781 nova_compute[237049]: 2026-01-10 17:18:53.935 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:18:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:18:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:18:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:03 np0005580781 podman[239733]: 2026-01-10 17:19:03.094415741 +0000 UTC m=+0.080895871 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 10 12:19:03 np0005580781 podman[239734]: 2026-01-10 17:19:03.160225678 +0000 UTC m=+0.153758795 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 10 12:19:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:08 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:19:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:19:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:19:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:19:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:19:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:19:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:19:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:19:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:24 np0005580781 podman[239918]: 2026-01-10 17:19:24.374666668 +0000 UTC m=+0.062127816 container create bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:19:24 np0005580781 systemd[1]: Started libpod-conmon-bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651.scope.
Jan 10 12:19:24 np0005580781 podman[239918]: 2026-01-10 17:19:24.353105786 +0000 UTC m=+0.040566994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:19:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:19:24 np0005580781 podman[239918]: 2026-01-10 17:19:24.5011088 +0000 UTC m=+0.188569988 container init bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 12:19:24 np0005580781 podman[239918]: 2026-01-10 17:19:24.511603063 +0000 UTC m=+0.199064221 container start bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:19:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:24 np0005580781 podman[239918]: 2026-01-10 17:19:24.516201111 +0000 UTC m=+0.203662309 container attach bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:19:24 np0005580781 naughty_napier[239935]: 167 167
Jan 10 12:19:24 np0005580781 systemd[1]: libpod-bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651.scope: Deactivated successfully.
Jan 10 12:19:24 np0005580781 podman[239918]: 2026-01-10 17:19:24.522305172 +0000 UTC m=+0.209766320 container died bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:19:24 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b88f179598aba8e2980910a4320a947d19b5738f9664e388c841d42d3bad2e32-merged.mount: Deactivated successfully.
Jan 10 12:19:24 np0005580781 podman[239918]: 2026-01-10 17:19:24.579564401 +0000 UTC m=+0.267025549 container remove bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 12:19:24 np0005580781 systemd[1]: libpod-conmon-bdc3362d52fdb6d961997ef1eff93f3d029420ca2ed5c8e3370a8b9f39f20651.scope: Deactivated successfully.
Jan 10 12:19:24 np0005580781 podman[239960]: 2026-01-10 17:19:24.801621132 +0000 UTC m=+0.065785638 container create 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 12:19:24 np0005580781 systemd[1]: Started libpod-conmon-10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72.scope.
Jan 10 12:19:24 np0005580781 podman[239960]: 2026-01-10 17:19:24.778410584 +0000 UTC m=+0.042575070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:19:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:19:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:24 np0005580781 podman[239960]: 2026-01-10 17:19:24.927556939 +0000 UTC m=+0.191721445 container init 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 10 12:19:24 np0005580781 podman[239960]: 2026-01-10 17:19:24.942209848 +0000 UTC m=+0.206374354 container start 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:19:24 np0005580781 podman[239960]: 2026-01-10 17:19:24.947125996 +0000 UTC m=+0.211290562 container attach 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 12:19:25 np0005580781 quizzical_gagarin[239976]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:19:25 np0005580781 quizzical_gagarin[239976]: --> All data devices are unavailable
Jan 10 12:19:25 np0005580781 systemd[1]: libpod-10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72.scope: Deactivated successfully.
Jan 10 12:19:25 np0005580781 podman[239960]: 2026-01-10 17:19:25.604200705 +0000 UTC m=+0.868365271 container died 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:19:25 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3102a5280c3c8099b38050107b075a9ed0ebb1b45ac4df6ba3aef8455a7420c2-merged.mount: Deactivated successfully.
Jan 10 12:19:25 np0005580781 podman[239960]: 2026-01-10 17:19:25.667919885 +0000 UTC m=+0.932084361 container remove 10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 12:19:25 np0005580781 systemd[1]: libpod-conmon-10bbf51fa2e65fcd5003cf4f18e9ee137771664bb2d8a1200658761e93dbda72.scope: Deactivated successfully.
Jan 10 12:19:26 np0005580781 podman[240068]: 2026-01-10 17:19:26.194075589 +0000 UTC m=+0.072664331 container create 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:19:26 np0005580781 systemd[1]: Started libpod-conmon-58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312.scope.
Jan 10 12:19:26 np0005580781 podman[240068]: 2026-01-10 17:19:26.165988924 +0000 UTC m=+0.044577716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:19:26 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:19:26 np0005580781 podman[240068]: 2026-01-10 17:19:26.272914241 +0000 UTC m=+0.151503003 container init 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 12:19:26 np0005580781 podman[240068]: 2026-01-10 17:19:26.279633178 +0000 UTC m=+0.158221920 container start 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:19:26 np0005580781 podman[240068]: 2026-01-10 17:19:26.283449705 +0000 UTC m=+0.162038447 container attach 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:19:26 np0005580781 elated_bardeen[240085]: 167 167
Jan 10 12:19:26 np0005580781 systemd[1]: libpod-58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312.scope: Deactivated successfully.
Jan 10 12:19:26 np0005580781 podman[240068]: 2026-01-10 17:19:26.285975725 +0000 UTC m=+0.164564427 container died 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 10 12:19:26 np0005580781 systemd[1]: var-lib-containers-storage-overlay-2b3d661a01937e737fdac9bebbdfe41422e977c5e424381b0268aaaa764e8163-merged.mount: Deactivated successfully.
Jan 10 12:19:26 np0005580781 podman[240068]: 2026-01-10 17:19:26.323957186 +0000 UTC m=+0.202545918 container remove 58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bardeen, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:19:26 np0005580781 systemd[1]: libpod-conmon-58b87e0d44cc79f4b960139c6018452e58a5ca8e5e1250840d829a64b2bbb312.scope: Deactivated successfully.
Jan 10 12:19:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:26 np0005580781 podman[240107]: 2026-01-10 17:19:26.534977079 +0000 UTC m=+0.063856004 container create a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:19:26 np0005580781 systemd[1]: Started libpod-conmon-a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd.scope.
Jan 10 12:19:26 np0005580781 podman[240107]: 2026-01-10 17:19:26.508740927 +0000 UTC m=+0.037619922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:19:26 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:19:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:26 np0005580781 podman[240107]: 2026-01-10 17:19:26.644807276 +0000 UTC m=+0.173686291 container init a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:19:26 np0005580781 podman[240107]: 2026-01-10 17:19:26.658989173 +0000 UTC m=+0.187868128 container start a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:19:26 np0005580781 podman[240107]: 2026-01-10 17:19:26.663990882 +0000 UTC m=+0.192869887 container attach a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]: {
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:    "0": [
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:        {
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "devices": [
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "/dev/loop3"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            ],
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_name": "ceph_lv0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_size": "21470642176",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "name": "ceph_lv0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "tags": {
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cluster_name": "ceph",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.crush_device_class": "",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.encrypted": "0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.objectstore": "bluestore",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osd_id": "0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.type": "block",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.vdo": "0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.with_tpm": "0"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            },
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "type": "block",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "vg_name": "ceph_vg0"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:        }
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:    ],
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:    "1": [
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:        {
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "devices": [
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "/dev/loop4"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            ],
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_name": "ceph_lv1",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_size": "21470642176",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "name": "ceph_lv1",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "tags": {
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cluster_name": "ceph",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.crush_device_class": "",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.encrypted": "0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.objectstore": "bluestore",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osd_id": "1",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.type": "block",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.vdo": "0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.with_tpm": "0"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            },
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "type": "block",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "vg_name": "ceph_vg1"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:        }
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:    ],
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:    "2": [
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:        {
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "devices": [
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "/dev/loop5"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            ],
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_name": "ceph_lv2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_size": "21470642176",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "name": "ceph_lv2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "tags": {
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.cluster_name": "ceph",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.crush_device_class": "",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.encrypted": "0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.objectstore": "bluestore",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osd_id": "2",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.type": "block",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.vdo": "0",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:                "ceph.with_tpm": "0"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            },
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "type": "block",
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:            "vg_name": "ceph_vg2"
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:        }
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]:    ]
Jan 10 12:19:26 np0005580781 gracious_robinson[240123]: }
Jan 10 12:19:27 np0005580781 systemd[1]: libpod-a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd.scope: Deactivated successfully.
Jan 10 12:19:27 np0005580781 podman[240132]: 2026-01-10 17:19:27.080855264 +0000 UTC m=+0.038088845 container died a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:19:27 np0005580781 systemd[1]: var-lib-containers-storage-overlay-b66e2437ae4c2704af21c9d4bd7aef803bae28c537fa62ea8654aff7710dfc5c-merged.mount: Deactivated successfully.
Jan 10 12:19:27 np0005580781 podman[240132]: 2026-01-10 17:19:27.135685395 +0000 UTC m=+0.092918876 container remove a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:19:27 np0005580781 systemd[1]: libpod-conmon-a4cb2fb2335858411f80ed43a95c208bb7207b0be66d43ee5eb4402a1ee0c4dd.scope: Deactivated successfully.
Jan 10 12:19:27 np0005580781 podman[240211]: 2026-01-10 17:19:27.674945686 +0000 UTC m=+0.055007268 container create 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:19:27 np0005580781 systemd[1]: Started libpod-conmon-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope.
Jan 10 12:19:27 np0005580781 podman[240211]: 2026-01-10 17:19:27.651415988 +0000 UTC m=+0.031477560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:19:27 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:19:27 np0005580781 podman[240211]: 2026-01-10 17:19:27.77248385 +0000 UTC m=+0.152545452 container init 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 12:19:27 np0005580781 podman[240211]: 2026-01-10 17:19:27.782238382 +0000 UTC m=+0.162299954 container start 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 12:19:27 np0005580781 podman[240211]: 2026-01-10 17:19:27.787289693 +0000 UTC m=+0.167351265 container attach 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:19:27 np0005580781 systemd[1]: libpod-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope: Deactivated successfully.
Jan 10 12:19:27 np0005580781 angry_ride[240227]: 167 167
Jan 10 12:19:27 np0005580781 podman[240211]: 2026-01-10 17:19:27.78933375 +0000 UTC m=+0.169395332 container died 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:19:27 np0005580781 conmon[240227]: conmon 60d1bb2fd33f115261fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope/container/memory.events
Jan 10 12:19:27 np0005580781 systemd[1]: var-lib-containers-storage-overlay-30a4c2041e86a141d794cb4353c659acea9f4ea5ffd6d896d4627cc1a6d52169-merged.mount: Deactivated successfully.
Jan 10 12:19:27 np0005580781 podman[240211]: 2026-01-10 17:19:27.831988582 +0000 UTC m=+0.212050134 container remove 60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:19:27 np0005580781 systemd[1]: libpod-conmon-60d1bb2fd33f115261fc201e29c286c511789c5d65f821d24067491be9921b31.scope: Deactivated successfully.
Jan 10 12:19:28 np0005580781 podman[240250]: 2026-01-10 17:19:28.075091831 +0000 UTC m=+0.064795221 container create 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 12:19:28 np0005580781 systemd[1]: Started libpod-conmon-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope.
Jan 10 12:19:28 np0005580781 podman[240250]: 2026-01-10 17:19:28.050847734 +0000 UTC m=+0.040551184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:19:28 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:19:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:19:28 np0005580781 podman[240250]: 2026-01-10 17:19:28.185435862 +0000 UTC m=+0.175139262 container init 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:19:28 np0005580781 podman[240250]: 2026-01-10 17:19:28.196847121 +0000 UTC m=+0.186550521 container start 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:19:28 np0005580781 podman[240250]: 2026-01-10 17:19:28.201925273 +0000 UTC m=+0.191628673 container attach 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 12:19:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:28 np0005580781 lvm[240347]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:19:28 np0005580781 lvm[240347]: VG ceph_vg1 finished
Jan 10 12:19:28 np0005580781 lvm[240345]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:19:28 np0005580781 lvm[240345]: VG ceph_vg0 finished
Jan 10 12:19:28 np0005580781 lvm[240348]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:19:28 np0005580781 lvm[240348]: VG ceph_vg2 finished
Jan 10 12:19:29 np0005580781 zen_jepsen[240266]: {}
Jan 10 12:19:29 np0005580781 systemd[1]: libpod-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope: Deactivated successfully.
Jan 10 12:19:29 np0005580781 systemd[1]: libpod-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope: Consumed 1.311s CPU time.
Jan 10 12:19:29 np0005580781 podman[240250]: 2026-01-10 17:19:29.038427964 +0000 UTC m=+1.028131354 container died 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 10 12:19:29 np0005580781 systemd[1]: var-lib-containers-storage-overlay-907351faf6aeeec66a0351f7b788893422b972c0e4da53feb9ba56a2047629ae-merged.mount: Deactivated successfully.
Jan 10 12:19:29 np0005580781 podman[240250]: 2026-01-10 17:19:29.08305243 +0000 UTC m=+1.072755800 container remove 6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 10 12:19:29 np0005580781 systemd[1]: libpod-conmon-6a4201c776d675dbc473fa5aaa4275e1ebe090ac05e400ae4f71ec4a5ca92502.scope: Deactivated successfully.
Jan 10 12:19:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:19:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:19:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:19:29 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:19:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:29 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:19:29 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:19:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:34 np0005580781 podman[240388]: 2026-01-10 17:19:34.083079386 +0000 UTC m=+0.076578990 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 10 12:19:34 np0005580781 podman[240389]: 2026-01-10 17:19:34.133027831 +0000 UTC m=+0.126292398 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 12:19:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:19:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945124111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:19:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:19:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945124111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:19:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:19:38
Jan 10 12:19:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:19:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:19:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'backups', 'volumes', 'cephfs.cephfs.meta']
Jan 10 12:19:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:19:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:19:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:19:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 10 12:19:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 10 12:19:39 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 10 12:19:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Jan 10 12:19:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 10 12:19:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 10 12:19:41 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 10 12:19:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 10 12:19:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 10 12:19:42 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 10 12:19:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 8.4 MiB data, 89 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s wr, 0 op/s
Jan 10 12:19:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 10 12:19:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 10 12:19:44 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 10 12:19:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 8.4 MiB data, 89 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s wr, 0 op/s
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00013036095103210262 of space, bias 1.0, pg target 0.039108285309630786 quantized to 32 (current 32)
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.310698069856682e-07 of space, bias 4.0, pg target 0.0011172837683828018 quantized to 16 (current 16)
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:19:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:19:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 6.8 MiB/s wr, 43 op/s
Jan 10 12:19:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 50 op/s
Jan 10 12:19:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:19:48.921 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:19:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:19:48.921 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:19:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:19:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:19:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 10 12:19:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 10 12:19:49 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 10 12:19:50 np0005580781 nova_compute[237049]: 2026-01-10 17:19:50.334 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:50 np0005580781 nova_compute[237049]: 2026-01-10 17:19:50.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:50 np0005580781 nova_compute[237049]: 2026-01-10 17:19:50.345 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:19:50 np0005580781 nova_compute[237049]: 2026-01-10 17:19:50.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:19:50 np0005580781 nova_compute[237049]: 2026-01-10 17:19:50.375 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:19:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 10 12:19:51 np0005580781 nova_compute[237049]: 2026-01-10 17:19:51.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:52 np0005580781 nova_compute[237049]: 2026-01-10 17:19:52.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:52 np0005580781 nova_compute[237049]: 2026-01-10 17:19:52.351 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:52 np0005580781 nova_compute[237049]: 2026-01-10 17:19:52.351 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:52 np0005580781 nova_compute[237049]: 2026-01-10 17:19:52.351 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:52 np0005580781 nova_compute[237049]: 2026-01-10 17:19:52.352 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:52 np0005580781 nova_compute[237049]: 2026-01-10 17:19:52.352 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:19:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.0 MiB/s wr, 45 op/s
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.378 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.378 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.378 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:19:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:19:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075959427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:19:53 np0005580781 nova_compute[237049]: 2026-01-10 17:19:53.970 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.162 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.163 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5278MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.163 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.164 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.245 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.246 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.278 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:19:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:19:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.3 MiB/s wr, 37 op/s
Jan 10 12:19:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:19:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280756129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.874 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.880 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.897 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.898 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:19:54 np0005580781 nova_compute[237049]: 2026-01-10 17:19:54.898 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:19:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 715 B/s wr, 12 op/s
Jan 10 12:19:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:19:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:01 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:20:01.729 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:b5:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:56:cf:00:80:b3'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 10 12:20:01 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:20:01.731 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 10 12:20:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:03 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:20:03.735 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 10 12:20:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:05 np0005580781 podman[240471]: 2026-01-10 17:20:05.076949551 +0000 UTC m=+0.070937076 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 12:20:05 np0005580781 podman[240472]: 2026-01-10 17:20:05.11108639 +0000 UTC m=+0.107218486 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 12:20:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:20:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:20:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:20:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:20:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:20:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:20:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:20:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 10 12:20:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 10 12:20:26 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 10 12:20:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 10 12:20:27 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 10 12:20:27 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 10 12:20:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 255 B/s wr, 0 op/s
Jan 10 12:20:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 10 12:20:28 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 10 12:20:28 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 10 12:20:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:20:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.3 KiB/s wr, 76 op/s
Jan 10 12:20:30 np0005580781 podman[240660]: 2026-01-10 17:20:30.653071525 +0000 UTC m=+0.054484522 container create 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 12:20:30 np0005580781 systemd[1]: Started libpod-conmon-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope.
Jan 10 12:20:30 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:20:30 np0005580781 podman[240660]: 2026-01-10 17:20:30.636097894 +0000 UTC m=+0.037510921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:20:30 np0005580781 podman[240660]: 2026-01-10 17:20:30.748308423 +0000 UTC m=+0.149721510 container init 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 12:20:30 np0005580781 podman[240660]: 2026-01-10 17:20:30.756648259 +0000 UTC m=+0.158061296 container start 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:20:30 np0005580781 podman[240660]: 2026-01-10 17:20:30.760943751 +0000 UTC m=+0.162356858 container attach 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 12:20:30 np0005580781 systemd[1]: libpod-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope: Deactivated successfully.
Jan 10 12:20:30 np0005580781 reverent_hopper[240676]: 167 167
Jan 10 12:20:30 np0005580781 conmon[240676]: conmon 594228df637c5125985b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope/container/memory.events
Jan 10 12:20:30 np0005580781 podman[240660]: 2026-01-10 17:20:30.767981373 +0000 UTC m=+0.169394380 container died 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:20:30 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1221e55b60758a1ce757af36c64b3bbe9dda4c19c776f6849e3c429481d79fdc-merged.mount: Deactivated successfully.
Jan 10 12:20:30 np0005580781 podman[240660]: 2026-01-10 17:20:30.821630637 +0000 UTC m=+0.223043664 container remove 594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:20:30 np0005580781 systemd[1]: libpod-conmon-594228df637c5125985becb01f6488c6a269bb6aa094b8bea1190848b24aad59.scope: Deactivated successfully.
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 10 12:20:30 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 10 12:20:31 np0005580781 podman[240699]: 2026-01-10 17:20:31.066314746 +0000 UTC m=+0.042729264 container create ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 10 12:20:31 np0005580781 systemd[1]: Started libpod-conmon-ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922.scope.
Jan 10 12:20:31 np0005580781 podman[240699]: 2026-01-10 17:20:31.049874432 +0000 UTC m=+0.026288960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:20:31 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:20:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:31 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:31 np0005580781 podman[240699]: 2026-01-10 17:20:31.161673458 +0000 UTC m=+0.138088036 container init ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:20:31 np0005580781 podman[240699]: 2026-01-10 17:20:31.171645288 +0000 UTC m=+0.148059836 container start ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:20:31 np0005580781 podman[240699]: 2026-01-10 17:20:31.175394772 +0000 UTC m=+0.151809370 container attach ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 12:20:31 np0005580781 magical_feynman[240716]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:20:31 np0005580781 magical_feynman[240716]: --> All data devices are unavailable
Jan 10 12:20:31 np0005580781 systemd[1]: libpod-ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922.scope: Deactivated successfully.
Jan 10 12:20:31 np0005580781 podman[240699]: 2026-01-10 17:20:31.746729269 +0000 UTC m=+0.723143787 container died ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:20:31 np0005580781 systemd[1]: var-lib-containers-storage-overlay-5247d9d152d236770a0a0e2ae91dc7d16a6f72458b639ef5b6c4f102ee8f7cf6-merged.mount: Deactivated successfully.
Jan 10 12:20:31 np0005580781 podman[240699]: 2026-01-10 17:20:31.794221439 +0000 UTC m=+0.770635977 container remove ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_feynman, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:20:31 np0005580781 systemd[1]: libpod-conmon-ef06f0cf83e4362f88aef48355da021a899ea5962e0fd0df01f41a7e88f3a922.scope: Deactivated successfully.
Jan 10 12:20:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 10 12:20:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 10 12:20:32 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 10 12:20:32 np0005580781 podman[240809]: 2026-01-10 17:20:32.360247421 +0000 UTC m=+0.063032985 container create 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:20:32 np0005580781 systemd[1]: Started libpod-conmon-1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031.scope.
Jan 10 12:20:32 np0005580781 podman[240809]: 2026-01-10 17:20:32.334540371 +0000 UTC m=+0.037325975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:20:32 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:20:32 np0005580781 podman[240809]: 2026-01-10 17:20:32.451075183 +0000 UTC m=+0.153860777 container init 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:20:32 np0005580781 podman[240809]: 2026-01-10 17:20:32.460691981 +0000 UTC m=+0.163477585 container start 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:20:32 np0005580781 podman[240809]: 2026-01-10 17:20:32.465759249 +0000 UTC m=+0.168544853 container attach 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:20:32 np0005580781 reverent_haslett[240826]: 167 167
Jan 10 12:20:32 np0005580781 systemd[1]: libpod-1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031.scope: Deactivated successfully.
Jan 10 12:20:32 np0005580781 podman[240809]: 2026-01-10 17:20:32.467446644 +0000 UTC m=+0.170232258 container died 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 10 12:20:32 np0005580781 systemd[1]: var-lib-containers-storage-overlay-38bb5680184a3e82fb62090727af5a5b31649fa705278874e7185280d5f0b431-merged.mount: Deactivated successfully.
Jan 10 12:20:32 np0005580781 podman[240809]: 2026-01-10 17:20:32.515475582 +0000 UTC m=+0.218261186 container remove 1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:20:32 np0005580781 systemd[1]: libpod-conmon-1514bfd898c28cc14ab99d3e0fb3ba52fc46f73346e400cb222d515296a27031.scope: Deactivated successfully.
Jan 10 12:20:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 16 KiB/s wr, 207 op/s
Jan 10 12:20:32 np0005580781 podman[240851]: 2026-01-10 17:20:32.744661269 +0000 UTC m=+0.065121314 container create ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:20:32 np0005580781 systemd[1]: Started libpod-conmon-ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058.scope.
Jan 10 12:20:32 np0005580781 podman[240851]: 2026-01-10 17:20:32.718274346 +0000 UTC m=+0.038734421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:20:32 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:20:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:32 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:32 np0005580781 podman[240851]: 2026-01-10 17:20:32.864577963 +0000 UTC m=+0.185038078 container init ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:20:32 np0005580781 podman[240851]: 2026-01-10 17:20:32.875834585 +0000 UTC m=+0.196294630 container start ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:20:32 np0005580781 podman[240851]: 2026-01-10 17:20:32.970564277 +0000 UTC m=+0.291024342 container attach ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]: {
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:    "0": [
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:        {
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "devices": [
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "/dev/loop3"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            ],
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_name": "ceph_lv0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_size": "21470642176",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "name": "ceph_lv0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "tags": {
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cluster_name": "ceph",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.crush_device_class": "",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.encrypted": "0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.objectstore": "bluestore",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osd_id": "0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.type": "block",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.vdo": "0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.with_tpm": "0"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            },
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "type": "block",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "vg_name": "ceph_vg0"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:        }
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:    ],
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:    "1": [
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:        {
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "devices": [
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "/dev/loop4"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            ],
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_name": "ceph_lv1",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_size": "21470642176",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "name": "ceph_lv1",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "tags": {
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cluster_name": "ceph",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.crush_device_class": "",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.encrypted": "0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.objectstore": "bluestore",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osd_id": "1",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.type": "block",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.vdo": "0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.with_tpm": "0"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            },
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "type": "block",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "vg_name": "ceph_vg1"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:        }
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:    ],
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:    "2": [
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:        {
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "devices": [
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "/dev/loop5"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            ],
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_name": "ceph_lv2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_size": "21470642176",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "name": "ceph_lv2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "tags": {
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.cluster_name": "ceph",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.crush_device_class": "",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.encrypted": "0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.objectstore": "bluestore",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osd_id": "2",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.type": "block",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.vdo": "0",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:                "ceph.with_tpm": "0"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            },
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "type": "block",
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:            "vg_name": "ceph_vg2"
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:        }
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]:    ]
Jan 10 12:20:33 np0005580781 mystifying_heyrovsky[240867]: }
Jan 10 12:20:33 np0005580781 systemd[1]: libpod-ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058.scope: Deactivated successfully.
Jan 10 12:20:33 np0005580781 podman[240851]: 2026-01-10 17:20:33.224433159 +0000 UTC m=+0.544893204 container died ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 10 12:20:33 np0005580781 systemd[1]: var-lib-containers-storage-overlay-8c2deab5d650d3f17743ee054fc1f7ef454023dbc0f5cec0a6458639194fb6d7-merged.mount: Deactivated successfully.
Jan 10 12:20:33 np0005580781 podman[240851]: 2026-01-10 17:20:33.280357728 +0000 UTC m=+0.600817753 container remove ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 10 12:20:33 np0005580781 systemd[1]: libpod-conmon-ad6cfa723908408f932e5d90a8036b80b545a4fe0234425913b34b13feb66058.scope: Deactivated successfully.
Jan 10 12:20:33 np0005580781 podman[240950]: 2026-01-10 17:20:33.830663979 +0000 UTC m=+0.061644659 container create 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:20:33 np0005580781 systemd[1]: Started libpod-conmon-356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4.scope.
Jan 10 12:20:33 np0005580781 podman[240950]: 2026-01-10 17:20:33.797653158 +0000 UTC m=+0.028633898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:20:33 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:20:33 np0005580781 podman[240950]: 2026-01-10 17:20:33.929412494 +0000 UTC m=+0.160393214 container init 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:20:33 np0005580781 podman[240950]: 2026-01-10 17:20:33.941217824 +0000 UTC m=+0.172198474 container start 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 12:20:33 np0005580781 podman[240950]: 2026-01-10 17:20:33.94563143 +0000 UTC m=+0.176612150 container attach 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:20:33 np0005580781 charming_euler[240966]: 167 167
Jan 10 12:20:33 np0005580781 systemd[1]: libpod-356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4.scope: Deactivated successfully.
Jan 10 12:20:33 np0005580781 podman[240950]: 2026-01-10 17:20:33.948815286 +0000 UTC m=+0.179795956 container died 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 10 12:20:33 np0005580781 systemd[1]: var-lib-containers-storage-overlay-1fde6b052b243de59a46030432a20b254fa172135cd371c67d443bd86082f79c-merged.mount: Deactivated successfully.
Jan 10 12:20:33 np0005580781 podman[240950]: 2026-01-10 17:20:33.994149645 +0000 UTC m=+0.225130295 container remove 356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:20:34 np0005580781 systemd[1]: libpod-conmon-356d3d66a21544b0cd6a14d2f24960d9eb45ad554d8ab26ffeeb27c13169fbd4.scope: Deactivated successfully.
Jan 10 12:20:34 np0005580781 podman[240990]: 2026-01-10 17:20:34.160639758 +0000 UTC m=+0.041337487 container create 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:20:34 np0005580781 systemd[1]: Started libpod-conmon-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope.
Jan 10 12:20:34 np0005580781 podman[240990]: 2026-01-10 17:20:34.142688125 +0000 UTC m=+0.023385874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:20:34 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:20:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:34 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:20:34 np0005580781 podman[240990]: 2026-01-10 17:20:34.259172996 +0000 UTC m=+0.139870745 container init 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:20:34 np0005580781 podman[240990]: 2026-01-10 17:20:34.271863015 +0000 UTC m=+0.152560754 container start 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:20:34 np0005580781 podman[240990]: 2026-01-10 17:20:34.275269138 +0000 UTC m=+0.155966897 container attach 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:20:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 10 12:20:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 10 12:20:34 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 10 12:20:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 10 12:20:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 10 12:20:34 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 10 12:20:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 9.2 KiB/s wr, 127 op/s
Jan 10 12:20:34 np0005580781 lvm[241083]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:20:34 np0005580781 lvm[241085]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:20:34 np0005580781 lvm[241085]: VG ceph_vg1 finished
Jan 10 12:20:34 np0005580781 lvm[241083]: VG ceph_vg0 finished
Jan 10 12:20:34 np0005580781 lvm[241087]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:20:34 np0005580781 lvm[241087]: VG ceph_vg2 finished
Jan 10 12:20:35 np0005580781 lvm[241088]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:20:35 np0005580781 lvm[241088]: VG ceph_vg1 finished
Jan 10 12:20:35 np0005580781 wizardly_poincare[241006]: {}
Jan 10 12:20:35 np0005580781 systemd[1]: libpod-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope: Deactivated successfully.
Jan 10 12:20:35 np0005580781 systemd[1]: libpod-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope: Consumed 1.352s CPU time.
Jan 10 12:20:35 np0005580781 podman[240990]: 2026-01-10 17:20:35.114192661 +0000 UTC m=+0.994890420 container died 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:20:35 np0005580781 systemd[1]: var-lib-containers-storage-overlay-96ad079141dadcd1c070da3ceaaa40c6f365c06fb61783f2cb6e697b7f522f6a-merged.mount: Deactivated successfully.
Jan 10 12:20:35 np0005580781 podman[240990]: 2026-01-10 17:20:35.18280483 +0000 UTC m=+1.063502559 container remove 882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 12:20:35 np0005580781 systemd[1]: libpod-conmon-882bcdb8a175ef9fc546001426942bc4115ee80443a1d307cb171c91bd9d6bae.scope: Deactivated successfully.
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:20:35 np0005580781 podman[241092]: 2026-01-10 17:20:35.256601279 +0000 UTC m=+0.102389326 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:20:35 np0005580781 podman[241100]: 2026-01-10 17:20:35.262279677 +0000 UTC m=+0.109540772 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.390768) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635391030, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1488, "num_deletes": 251, "total_data_size": 1571170, "memory_usage": 1602816, "flush_reason": "Manual Compaction"}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635407882, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1528471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14324, "largest_seqno": 15811, "table_properties": {"data_size": 1521421, "index_size": 4125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14420, "raw_average_key_size": 19, "raw_value_size": 1507279, "raw_average_value_size": 2079, "num_data_blocks": 188, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065499, "oldest_key_time": 1768065499, "file_creation_time": 1768065635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17128 microseconds, and 10542 cpu microseconds.
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.408010) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1528471 bytes OK
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.408052) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.409819) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.409848) EVENT_LOG_v1 {"time_micros": 1768065635409842, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.409876) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1564619, prev total WAL file size 1564619, number of live WAL files 2.
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.410898) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1492KB)], [35(5495KB)]
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635411001, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 7155428, "oldest_snapshot_seqno": -1}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3583 keys, 5955592 bytes, temperature: kUnknown
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635453019, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5955592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5928449, "index_size": 17113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 84755, "raw_average_key_size": 23, "raw_value_size": 5860824, "raw_average_value_size": 1635, "num_data_blocks": 736, "num_entries": 3583, "num_filter_entries": 3583, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.453440) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5955592 bytes
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.454992) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 141.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 5.4 +0.0 blob) out(5.7 +0.0 blob), read-write-amplify(8.6) write-amplify(3.9) OK, records in: 4101, records dropped: 518 output_compression: NoCompression
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.455022) EVENT_LOG_v1 {"time_micros": 1768065635455008, "job": 16, "event": "compaction_finished", "compaction_time_micros": 42179, "compaction_time_cpu_micros": 19546, "output_level": 6, "num_output_files": 1, "total_output_size": 5955592, "num_input_records": 4101, "num_output_records": 3583, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635455660, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065635457844, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.410638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:20:35 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:20:35.457975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:20:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:20:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2838137479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:20:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:20:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2838137479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:20:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 10 12:20:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 10 12:20:36 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 10 12:20:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 11 KiB/s wr, 168 op/s
Jan 10 12:20:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 10 12:20:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 10 12:20:37 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 10 12:20:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:20:38
Jan 10 12:20:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:20:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:20:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['backups', 'volumes', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 10 12:20:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:20:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 10 12:20:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 10 12:20:38 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 10 12:20:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 18 KiB/s wr, 206 op/s
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:20:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:20:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 10 12:20:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 10 12:20:39 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 10 12:20:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 65 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.8 MiB/s wr, 52 op/s
Jan 10 12:20:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 10 12:20:41 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 10 12:20:41 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 10 12:20:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 105 MiB data, 163 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 12 MiB/s wr, 189 op/s
Jan 10 12:20:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 10 12:20:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 10 12:20:42 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 10 12:20:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 10 12:20:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 10 12:20:43 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 10 12:20:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0017108615739368682 of space, bias 1.0, pg target 0.5132584721810605 quantized to 32 (current 32)
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.068536806985287e-07 of space, bias 4.0, pg target 0.0009682244168382343 quantized to 16 (current 16)
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:20:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 105 MiB data, 163 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 12 MiB/s wr, 158 op/s
Jan 10 12:20:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 10 12:20:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 10 12:20:44 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 10 12:20:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 10 12:20:45 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 10 12:20:45 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 10 12:20:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 73 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 12 MiB/s wr, 227 op/s
Jan 10 12:20:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 10 12:20:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 10 12:20:46 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 10 12:20:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 10 12:20:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 10 12:20:47 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 10 12:20:48 np0005580781 nova_compute[237049]: 2026-01-10 17:20:48.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:48 np0005580781 nova_compute[237049]: 2026-01-10 17:20:48.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 10 12:20:48 np0005580781 nova_compute[237049]: 2026-01-10 17:20:48.376 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 10 12:20:48 np0005580781 nova_compute[237049]: 2026-01-10 17:20:48.378 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:48 np0005580781 nova_compute[237049]: 2026-01-10 17:20:48.378 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 10 12:20:48 np0005580781 nova_compute[237049]: 2026-01-10 17:20:48.396 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 12 MiB/s wr, 357 op/s
Jan 10 12:20:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 10 12:20:48 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 10 12:20:48 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 10 12:20:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:20:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:20:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:20:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:20:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:20:48.922 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:20:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 10 12:20:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 10 12:20:49 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 10 12:20:50 np0005580781 nova_compute[237049]: 2026-01-10 17:20:50.415 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:50 np0005580781 nova_compute[237049]: 2026-01-10 17:20:50.416 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:20:50 np0005580781 nova_compute[237049]: 2026-01-10 17:20:50.416 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:20:50 np0005580781 nova_compute[237049]: 2026-01-10 17:20:50.438 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:20:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 13 KiB/s wr, 192 op/s
Jan 10 12:20:52 np0005580781 nova_compute[237049]: 2026-01-10 17:20:52.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:52 np0005580781 nova_compute[237049]: 2026-01-10 17:20:52.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:52 np0005580781 nova_compute[237049]: 2026-01-10 17:20:52.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 15 KiB/s wr, 234 op/s
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.374 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.375 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.375 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.376 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.376 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:20:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:20:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534176182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:20:53 np0005580781 nova_compute[237049]: 2026-01-10 17:20:53.906 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.137 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.140 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5251MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.140 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.141 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:20:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 10 12:20:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 10 12:20:54 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.428 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.429 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.535 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing inventories for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 10 12:20:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.3 KiB/s wr, 142 op/s
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.598 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating ProviderTree inventory for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.599 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.619 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing aggregate associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.641 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing trait associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 10 12:20:54 np0005580781 nova_compute[237049]: 2026-01-10 17:20:54.658 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:20:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:20:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872049099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:20:55 np0005580781 nova_compute[237049]: 2026-01-10 17:20:55.275 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:20:55 np0005580781 nova_compute[237049]: 2026-01-10 17:20:55.282 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:20:55 np0005580781 nova_compute[237049]: 2026-01-10 17:20:55.301 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:20:55 np0005580781 nova_compute[237049]: 2026-01-10 17:20:55.302 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:20:55 np0005580781 nova_compute[237049]: 2026-01-10 17:20:55.302 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:20:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 10 12:20:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 10 12:20:55 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 10 12:20:56 np0005580781 nova_compute[237049]: 2026-01-10 17:20:56.302 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:56 np0005580781 nova_compute[237049]: 2026-01-10 17:20:56.303 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:20:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 7.7 KiB/s wr, 139 op/s
Jan 10 12:20:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 6.0 KiB/s wr, 104 op/s
Jan 10 12:20:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:20:59 np0005580781 nova_compute[237049]: 2026-01-10 17:20:59.897 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "6290fedf-9ecb-464c-8d5e-b6af64859702" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:20:59 np0005580781 nova_compute[237049]: 2026-01-10 17:20:59.898 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:20:59 np0005580781 nova_compute[237049]: 2026-01-10 17:20:59.960 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.105 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.106 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.113 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.114 237053 INFO nova.compute.claims [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.385 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 10 12:21:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:21:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3335636917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.926 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.933 237053 DEBUG nova.compute.provider_tree [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.950 237053 DEBUG nova.scheduler.client.report [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.977 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:00 np0005580781 nova_compute[237049]: 2026-01-10 17:21:00.978 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 10 12:21:01 np0005580781 nova_compute[237049]: 2026-01-10 17:21:01.025 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 10 12:21:01 np0005580781 nova_compute[237049]: 2026-01-10 17:21:01.026 237053 DEBUG nova.network.neutron [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 10 12:21:01 np0005580781 nova_compute[237049]: 2026-01-10 17:21:01.058 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 10 12:21:01 np0005580781 nova_compute[237049]: 2026-01-10 17:21:01.081 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 10 12:21:01 np0005580781 nova_compute[237049]: 2026-01-10 17:21:01.120 237053 INFO nova.virt.block_device [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Booting with volume 77e9b8e1-774e-41cc-88ba-d21e1643cb3e at /dev/vda#033[00m
Jan 10 12:21:01 np0005580781 nova_compute[237049]: 2026-01-10 17:21:01.681 237053 DEBUG os_brick.utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 10 12:21:01 np0005580781 nova_compute[237049]: 2026-01-10 17:21:01.683 237053 INFO oslo.privsep.daemon [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpwdcyow_v/privsep.sock']#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.401 237053 INFO oslo.privsep.daemon [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.252 241246 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.257 241246 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.259 241246 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.259 241246 INFO oslo.privsep.daemon [-] privsep daemon running as pid 241246#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.408 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[63eb29c4-8493-4999-a06c-c1e51d930df1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.519 241246 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.538 241246 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.539 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[26762a29-5109-4905-ab2e-6c6be2e08ff6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.541 241246 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.548 237053 DEBUG nova.network.neutron [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.549 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.557 241246 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.557 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[11d26d22-90a2-42c4-826b-48443f4e0bd3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a9da3fcdfda', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.562 241246 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.580 241246 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.581 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[e54c8356-1b07-4a90-81cf-6ba86471bb0c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.584 241246 DEBUG oslo.privsep.daemon [-] privsep: reply[030ab6c7-1178-4c49-8a13-87d9629ce676]: (4, 'a9d7d544-72dd-4b08-9e5e-495057bde287') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.585 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.601 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.604 237053 DEBUG os_brick.initiator.connectors.lightos [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.605 237053 DEBUG os_brick.initiator.connectors.lightos [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.605 237053 DEBUG os_brick.initiator.connectors.lightos [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:3f2d999e-37e2-4333-aca5-637ccade160f dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.606 237053 DEBUG os_brick.utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] <== get_connector_properties: return (923ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a9da3fcdfda', 'do_local_attach': False, 'nvme_hostid': '3f2d999e-37e2-4333-aca5-637ccade160f', 'system uuid': 'a9d7d544-72dd-4b08-9e5e-495057bde287', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:3f2d999e-37e2-4333-aca5-637ccade160f', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 10 12:21:02 np0005580781 nova_compute[237049]: 2026-01-10 17:21:02.606 237053 DEBUG nova.virt.block_device [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Updating existing volume attachment record: f34e40bd-e482-449a-95f8-cbab65899fc7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 10 12:21:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 12:21:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874764883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.293 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.296 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.297 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Creating image(s)#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.298 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.299 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Ensure instance console log exists: /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.299 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.300 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.300 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.305 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'attachment_id': 'f34e40bd-e482-449a-95f8-cbab65899fc7', 'mount_device': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-77e9b8e1-774e-41cc-88ba-d21e1643cb3e', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '77e9b8e1-774e-41cc-88ba-d21e1643cb3e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6290fedf-9ecb-464c-8d5e-b6af64859702', 'attached_at': '', 'detached_at': '', 'volume_id': '77e9b8e1-774e-41cc-88ba-d21e1643cb3e', 'serial': '77e9b8e1-774e-41cc-88ba-d21e1643cb3e'}, 'delete_on_termination': True, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.312 237053 WARNING nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.319 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.320 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.325 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.326 237053 DEBUG nova.virt.libvirt.host [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.327 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.327 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-10T17:19:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='83b4ecee-2b50-47ec-82ec-7f3e1d1624ce',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.328 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.328 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.328 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.329 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.329 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.329 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.330 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.330 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.331 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.331 237053 DEBUG nova.virt.hardware [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.365 237053 DEBUG nova.storage.rbd_utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.369 237053 DEBUG nova.privsep.utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.370 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 10 12:21:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 10 12:21:04 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 10 12:21:04 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:21:04.478 152671 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd6:b5:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8e:56:cf:00:80:b3'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 10 12:21:04 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:21:04.481 152671 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 10 12:21:04 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:21:04.483 152671 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=fbd04e21-7be2-4eb3-a385-03f0bb540a40, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 10 12:21:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Jan 10 12:21:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 12:21:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849157757' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.956 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.958 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.959 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:04 np0005580781 nova_compute[237049]: 2026-01-10 17:21:04.961 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:04 np0005580781 systemd[1]: Starting libvirt secret daemon...
Jan 10 12:21:05 np0005580781 systemd[1]: Started libvirt secret daemon.
Jan 10 12:21:05 np0005580781 nova_compute[237049]: 2026-01-10 17:21:05.061 237053 DEBUG nova.objects.instance [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6290fedf-9ecb-464c-8d5e-b6af64859702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 10 12:21:05 np0005580781 nova_compute[237049]: 2026-01-10 17:21:05.083 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] End _get_guest_xml xml=<domain type="kvm">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <uuid>6290fedf-9ecb-464c-8d5e-b6af64859702</uuid>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <name>instance-00000001</name>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <memory>131072</memory>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <vcpu>1</vcpu>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <metadata>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <nova:name>instance-depend-image</nova:name>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <nova:creationTime>2026-01-10 17:21:04</nova:creationTime>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <nova:flavor name="m1.nano">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <nova:memory>128</nova:memory>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <nova:disk>1</nova:disk>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <nova:swap>0</nova:swap>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <nova:ephemeral>0</nova:ephemeral>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <nova:vcpus>1</nova:vcpus>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      </nova:flavor>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <nova:owner>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <nova:user uuid="75fbaed513e94e80acbf58803e0a4b03">tempest-ImageDependencyTests-1967781085-project-member</nova:user>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <nova:project uuid="0299cbaa071f4ac4b1435e4144bd4d79">tempest-ImageDependencyTests-1967781085</nova:project>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      </nova:owner>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <nova:ports/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </nova:instance>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  </metadata>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <sysinfo type="smbios">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <system>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <entry name="manufacturer">RDO</entry>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <entry name="product">OpenStack Compute</entry>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <entry name="serial">6290fedf-9ecb-464c-8d5e-b6af64859702</entry>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <entry name="uuid">6290fedf-9ecb-464c-8d5e-b6af64859702</entry>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <entry name="family">Virtual Machine</entry>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </system>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  </sysinfo>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <os>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <boot dev="hd"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <smbios mode="sysinfo"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  </os>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <features>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <acpi/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <apic/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <vmcoreinfo/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  </features>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <clock offset="utc">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <timer name="pit" tickpolicy="delay"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <timer name="hpet" present="no"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  </clock>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <cpu mode="host-model" match="exact">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <topology sockets="1" cores="1" threads="1"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  </cpu>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  <devices>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <disk type="network" device="cdrom">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <driver type="raw" cache="none"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <source protocol="rbd" name="vms/6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <host name="192.168.122.100" port="6789"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      </source>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <auth username="openstack">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      </auth>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <target dev="sda" bus="sata"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <disk type="network" device="disk">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <source protocol="rbd" name="volumes/volume-77e9b8e1-774e-41cc-88ba-d21e1643cb3e">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <host name="192.168.122.100" port="6789"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      </source>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <auth username="openstack">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:        <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      </auth>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <target dev="vda" bus="virtio"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <serial>77e9b8e1-774e-41cc-88ba-d21e1643cb3e</serial>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <serial type="pty">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <log file="/var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/console.log" append="off"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </serial>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <video>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <model type="virtio"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </video>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <input type="tablet" bus="usb"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <rng model="virtio">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <backend model="random">/dev/urandom</backend>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </rng>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <controller type="usb" index="0"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    <memballoon model="virtio">
Jan 10 12:21:05 np0005580781 nova_compute[237049]:      <stats period="10"/>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:    </memballoon>
Jan 10 12:21:05 np0005580781 nova_compute[237049]:  </devices>
Jan 10 12:21:05 np0005580781 nova_compute[237049]: </domain>
Jan 10 12:21:05 np0005580781 nova_compute[237049]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 10 12:21:05 np0005580781 nova_compute[237049]: 2026-01-10 17:21:05.159 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 10 12:21:05 np0005580781 nova_compute[237049]: 2026-01-10 17:21:05.159 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 10 12:21:05 np0005580781 nova_compute[237049]: 2026-01-10 17:21:05.160 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Using config drive#033[00m
Jan 10 12:21:05 np0005580781 nova_compute[237049]: 2026-01-10 17:21:05.192 237053 DEBUG nova.storage.rbd_utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:06 np0005580781 podman[241333]: 2026-01-10 17:21:06.092633553 +0000 UTC m=+0.088764386 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:21:06 np0005580781 podman[241332]: 2026-01-10 17:21:06.093914599 +0000 UTC m=+0.086061732 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Jan 10 12:21:06 np0005580781 nova_compute[237049]: 2026-01-10 17:21:06.345 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Creating config drive at /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config#033[00m
Jan 10 12:21:06 np0005580781 nova_compute[237049]: 2026-01-10 17:21:06.355 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gneuq0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:06 np0005580781 nova_compute[237049]: 2026-01-10 17:21:06.495 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gneuq0" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:06 np0005580781 nova_compute[237049]: 2026-01-10 17:21:06.531 237053 DEBUG nova.storage.rbd_utils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:06 np0005580781 nova_compute[237049]: 2026-01-10 17:21:06.536 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 1023 B/s wr, 9 op/s
Jan 10 12:21:07 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 10 12:21:07 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 10 12:21:07 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 10 12:21:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:21:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 10 12:21:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 10 12:21:08 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 10 12:21:08 np0005580781 nova_compute[237049]: 2026-01-10 17:21:08.834 237053 DEBUG oslo_concurrency.processutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config 6290fedf-9ecb-464c-8d5e-b6af64859702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:08 np0005580781 nova_compute[237049]: 2026-01-10 17:21:08.835 237053 INFO nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deleting local config drive /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702/disk.config because it was imported into RBD.#033[00m
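The config-drive sequence logged above has three steps: mkisofs builds an ISO9660 "config-2" volume from a temp directory, `rbd import` copies it into the Ceph `vms` pool as `<uuid>_disk.config`, and the local ISO is deleted once the import succeeds. A minimal sketch of the two commands, assembled from the arguments visible in the log (the helper function itself is hypothetical, not Nova's imagebackend code):

```python
def config_drive_commands(instance_uuid, tmp_dir,
                          pool="vms", ceph_id="openstack",
                          conf="/etc/ceph/ceph.conf"):
    """Build the mkisofs and rbd-import argv lists seen in the log."""
    iso_path = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
    mkisofs_cmd = [
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-J", "-r", "-V", "config-2",   # Joliet + Rock Ridge, label config-2
        tmp_dir,                         # staging dir with the metadata files
    ]
    rbd_cmd = [
        "rbd", "import", "--pool", pool, iso_path,
        f"{instance_uuid}_disk.config",  # destination image name in the pool
        "--image-format=2",
        "--id", ceph_id, "--conf", conf,
    ]
    return mkisofs_cmd, rbd_cmd
```

In the log both commands returned 0 (in 0.140s and 2.298s respectively), after which the local `disk.config` was deleted because the RBD copy is now authoritative.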
Jan 10 12:21:08 np0005580781 systemd-machined[205102]: New machine qemu-1-instance-00000001.
Jan 10 12:21:08 np0005580781 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 10 12:21:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:21:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:21:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:21:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:21:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:21:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:21:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.862 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065669.8620079, 6290fedf-9ecb-464c-8d5e-b6af64859702 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.864 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] VM Resumed (Lifecycle Event)#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.868 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.869 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.873 237053 INFO nova.virt.libvirt.driver [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance spawned successfully.#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.874 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.923 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.928 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.952 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.956 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.958 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.959 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.960 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.961 237053 DEBUG nova.virt.libvirt.driver [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.969 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.970 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065669.864454, 6290fedf-9ecb-464c-8d5e-b6af64859702 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 10 12:21:09 np0005580781 nova_compute[237049]: 2026-01-10 17:21:09.971 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] VM Started (Lifecycle Event)#033[00m
Jan 10 12:21:10 np0005580781 nova_compute[237049]: 2026-01-10 17:21:10.066 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:10 np0005580781 nova_compute[237049]: 2026-01-10 17:21:10.071 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 10 12:21:10 np0005580781 nova_compute[237049]: 2026-01-10 17:21:10.080 237053 INFO nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 5.79 seconds to spawn the instance on the hypervisor.#033[00m
Jan 10 12:21:10 np0005580781 nova_compute[237049]: 2026-01-10 17:21:10.082 237053 DEBUG nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:10 np0005580781 nova_compute[237049]: 2026-01-10 17:21:10.102 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 10 12:21:10 np0005580781 nova_compute[237049]: 2026-01-10 17:21:10.164 237053 INFO nova.compute.manager [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 10.10 seconds to build instance.#033[00m
Jan 10 12:21:10 np0005580781 nova_compute[237049]: 2026-01-10 17:21:10.187 237053 DEBUG oslo_concurrency.lockutils [None req-4ac3dbbe-76c4-4790-871b-531b7ff544d3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 827 B/s rd, 23 KiB/s wr, 0 op/s
Jan 10 12:21:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 19 KiB/s wr, 19 op/s
Jan 10 12:21:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 10 12:21:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 10 12:21:13 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 10 12:21:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 10 12:21:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 10 12:21:14 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 10 12:21:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 25 op/s
Jan 10 12:21:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 10 12:21:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 10 12:21:16 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 10 12:21:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.3 KiB/s wr, 96 op/s
Jan 10 12:21:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 10 12:21:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 10 12:21:17 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 10 12:21:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 10 12:21:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 10 12:21:18 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 10 12:21:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 171 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.5 KiB/s wr, 145 op/s
Jan 10 12:21:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 171 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.0 KiB/s wr, 129 op/s
Jan 10 12:21:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.8 KiB/s wr, 92 op/s
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.029 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.029 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.051 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.161 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.161 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.169 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.169 237053 INFO nova.compute.claims [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.319 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:23 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:21:23 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003498910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.872 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.881 237053 DEBUG nova.compute.provider_tree [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.905 237053 DEBUG nova.scheduler.client.report [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
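The inventory dict logged above pairs raw totals with reserved amounts and allocation ratios. Placement's schedulable capacity for each resource class is `(total - reserved) * allocation_ratio`; applied to the logged values, this node can overcommit 8 physical VCPUs to 32 schedulable ones, while memory and disk stay close to their physical totals:

```python
def capacity(inventory):
    """Schedulable capacity per resource class: (total - reserved) * ratio."""
    return {
        rc: (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        for rc, inv in inventory.items()
    }

# Values copied from the provider inventory in the log above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
```

The DISK_GB ratio below 1.0 deliberately under-reports capacity, a common safety margin when instance disks live on shared Ceph storage as they do here.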
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.940 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.942 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.993 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 10 12:21:23 np0005580781 nova_compute[237049]: 2026-01-10 17:21:23.994 237053 DEBUG nova.network.neutron [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.018 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.038 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.117 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.118 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.119 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Creating image(s)#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.150 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.186 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.221 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.226 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "1a00580aebdcff88afc7729ad1595e2017e01a34" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.227 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "1a00580aebdcff88afc7729ad1595e2017e01a34" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 10 12:21:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 10 12:21:24 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.518 237053 DEBUG nova.virt.libvirt.imagebackend [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image locations are: [{'url': 'rbd://a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/images/debf2853-94a6-4539-86f8-a9fe443a47cc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/images/debf2853-94a6-4539-86f8-a9fe443a47cc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 10 12:21:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 5.0 KiB/s wr, 80 op/s
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.591 237053 DEBUG nova.network.neutron [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.592 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.596 237053 DEBUG nova.virt.libvirt.imagebackend [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Selected location: {'url': 'rbd://a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4/images/debf2853-94a6-4539-86f8-a9fe443a47cc/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.597 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] cloning images/debf2853-94a6-4539-86f8-a9fe443a47cc@snap to None/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.722 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "1a00580aebdcff88afc7729ad1595e2017e01a34" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.900 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] resizing rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 10 12:21:24 np0005580781 nova_compute[237049]: 2026-01-10 17:21:24.998 237053 DEBUG nova.objects.instance [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'migration_context' on Instance uuid 114a4603-17a5-4e6b-b2d6-c77ef324a07d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.037 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.038 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Ensure instance console log exists: /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.038 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.039 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.040 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.042 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='a55840e5637d8193bf5f45ed86d227d1',container_format='bare',created_at=2026-01-10T17:21:17Z,direct_url=<?>,disk_format='raw',id=debf2853-94a6-4539-86f8-a9fe443a47cc,min_disk=0,min_ram=0,name='tempest-image-dependency-test-105931288',owner='0299cbaa071f4ac4b1435e4144bd4d79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-10T17:21:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'size': 0, 'device_name': '/dev/vda', 'encrypted': False, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'device_type': 'disk', 'image_id': 'debf2853-94a6-4539-86f8-a9fe443a47cc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.049 237053 WARNING nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.056 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.057 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.061 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.062 237053 DEBUG nova.virt.libvirt.host [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.063 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.063 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-10T17:19:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='83b4ecee-2b50-47ec-82ec-7f3e1d1624ce',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='a55840e5637d8193bf5f45ed86d227d1',container_format='bare',created_at=2026-01-10T17:21:17Z,direct_url=<?>,disk_format='raw',id=debf2853-94a6-4539-86f8-a9fe443a47cc,min_disk=0,min_ram=0,name='tempest-image-dependency-test-105931288',owner='0299cbaa071f4ac4b1435e4144bd4d79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-10T17:21:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.064 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.064 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.065 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.065 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.066 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.066 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.067 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.067 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.068 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.068 237053 DEBUG nova.virt.hardware [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.073 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 12:21:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211419263' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.689 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.723 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:25 np0005580781 nova_compute[237049]: 2026-01-10 17:21:25.729 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:26 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 10 12:21:26 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967516454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.307 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.311 237053 DEBUG nova.objects.instance [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 114a4603-17a5-4e6b-b2d6-c77ef324a07d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.332 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] End _get_guest_xml xml=<domain type="kvm">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <uuid>114a4603-17a5-4e6b-b2d6-c77ef324a07d</uuid>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <name>instance-00000002</name>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <memory>131072</memory>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <vcpu>1</vcpu>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <metadata>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <nova:name>instance-depend-image</nova:name>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <nova:creationTime>2026-01-10 17:21:25</nova:creationTime>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <nova:flavor name="m1.nano">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <nova:memory>128</nova:memory>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <nova:disk>1</nova:disk>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <nova:swap>0</nova:swap>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <nova:ephemeral>0</nova:ephemeral>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <nova:vcpus>1</nova:vcpus>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      </nova:flavor>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <nova:owner>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <nova:user uuid="75fbaed513e94e80acbf58803e0a4b03">tempest-ImageDependencyTests-1967781085-project-member</nova:user>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <nova:project uuid="0299cbaa071f4ac4b1435e4144bd4d79">tempest-ImageDependencyTests-1967781085</nova:project>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      </nova:owner>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <nova:root type="image" uuid="debf2853-94a6-4539-86f8-a9fe443a47cc"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <nova:ports/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </nova:instance>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  </metadata>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <sysinfo type="smbios">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <system>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <entry name="manufacturer">RDO</entry>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <entry name="product">OpenStack Compute</entry>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <entry name="serial">114a4603-17a5-4e6b-b2d6-c77ef324a07d</entry>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <entry name="uuid">114a4603-17a5-4e6b-b2d6-c77ef324a07d</entry>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <entry name="family">Virtual Machine</entry>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </system>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  </sysinfo>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <os>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <boot dev="hd"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <smbios mode="sysinfo"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  </os>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <features>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <acpi/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <apic/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <vmcoreinfo/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  </features>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <clock offset="utc">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <timer name="pit" tickpolicy="delay"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <timer name="hpet" present="no"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  </clock>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <cpu mode="host-model" match="exact">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <topology sockets="1" cores="1" threads="1"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  </cpu>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  <devices>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <disk type="network" device="disk">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <driver type="raw" cache="none"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <source protocol="rbd" name="vms/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <host name="192.168.122.100" port="6789"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      </source>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <auth username="openstack">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      </auth>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <target dev="vda" bus="virtio"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <disk type="network" device="cdrom">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <driver type="raw" cache="none"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <source protocol="rbd" name="vms/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <host name="192.168.122.100" port="6789"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      </source>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <auth username="openstack">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:        <secret type="ceph" uuid="a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      </auth>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <target dev="sda" bus="sata"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </disk>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <serial type="pty">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <log file="/var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/console.log" append="off"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </serial>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <video>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <model type="virtio"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </video>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <input type="tablet" bus="usb"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <rng model="virtio">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <backend model="random">/dev/urandom</backend>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </rng>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="pci" model="pcie-root-port"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <controller type="usb" index="0"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    <memballoon model="virtio">
Jan 10 12:21:26 np0005580781 nova_compute[237049]:      <stats period="10"/>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:    </memballoon>
Jan 10 12:21:26 np0005580781 nova_compute[237049]:  </devices>
Jan 10 12:21:26 np0005580781 nova_compute[237049]: </domain>
Jan 10 12:21:26 np0005580781 nova_compute[237049]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.393 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.393 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.393 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Using config drive#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.418 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.578 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Creating config drive at /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config#033[00m
Jan 10 12:21:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v810: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.5 KiB/s wr, 98 op/s
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.588 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8bbnkfj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.729 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8bbnkfj" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.774 237053 DEBUG nova.storage.rbd_utils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] rbd image 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.780 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.960 237053 DEBUG oslo_concurrency.processutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config 114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:26 np0005580781 nova_compute[237049]: 2026-01-10 17:21:26.962 237053 INFO nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deleting local config drive /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d/disk.config because it was imported into RBD.#033[00m
Jan 10 12:21:27 np0005580781 systemd-machined[205102]: New machine qemu-2-instance-00000002.
Jan 10 12:21:27 np0005580781 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.784 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065687.7838073, 114a4603-17a5-4e6b-b2d6-c77ef324a07d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.786 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] VM Resumed (Lifecycle Event)#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.790 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.791 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.798 237053 INFO nova.virt.libvirt.driver [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance spawned successfully.#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.798 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.837 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.846 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.852 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.853 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.854 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.854 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.855 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.856 237053 DEBUG nova.virt.libvirt.driver [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.907 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.907 237053 DEBUG nova.virt.driver [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] Emitting event <LifecycleEvent: 1768065687.7853625, 114a4603-17a5-4e6b-b2d6-c77ef324a07d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.908 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] VM Started (Lifecycle Event)#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.943 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.948 237053 DEBUG nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.957 237053 INFO nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 3.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.958 237053 DEBUG nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:27 np0005580781 nova_compute[237049]: 2026-01-10 17:21:27.973 237053 INFO nova.compute.manager [None req-e8bffe3c-fd35-435d-be63-f593c81b47ca - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 10 12:21:28 np0005580781 nova_compute[237049]: 2026-01-10 17:21:28.015 237053 INFO nova.compute.manager [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 4.89 seconds to build instance.#033[00m
Jan 10 12:21:28 np0005580781 nova_compute[237049]: 2026-01-10 17:21:28.036 237053 DEBUG oslo_concurrency.lockutils [None req-0ff72c36-a410-4b02-8559-fd571b216556 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 18 KiB/s wr, 83 op/s
Jan 10 12:21:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v812: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 17 KiB/s wr, 72 op/s
Jan 10 12:21:30 np0005580781 nova_compute[237049]: 2026-01-10 17:21:30.937 237053 DEBUG nova.compute.manager [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:30 np0005580781 nova_compute[237049]: 2026-01-10 17:21:30.999 237053 INFO nova.compute.manager [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] instance snapshotting#033[00m
Jan 10 12:21:31 np0005580781 nova_compute[237049]: 2026-01-10 17:21:31.279 237053 INFO nova.virt.libvirt.driver [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Beginning live snapshot process#033[00m
Jan 10 12:21:31 np0005580781 nova_compute[237049]: 2026-01-10 17:21:31.473 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] creating snapshot(0e0fcb56a68946158c679c3e8fa00004) on rbd image(114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 10 12:21:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 10 12:21:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 10 12:21:31 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 10 12:21:31 np0005580781 nova_compute[237049]: 2026-01-10 17:21:31.762 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] cloning vms/114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk@0e0fcb56a68946158c679c3e8fa00004 to images/54dadfaa-d0a5-471e-9ebf-65a4699f0e55 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 10 12:21:31 np0005580781 nova_compute[237049]: 2026-01-10 17:21:31.899 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] flattening images/54dadfaa-d0a5-471e-9ebf-65a4699f0e55 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 10 12:21:32 np0005580781 nova_compute[237049]: 2026-01-10 17:21:32.355 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] removing snapshot(0e0fcb56a68946158c679c3e8fa00004) on rbd image(114a4603-17a5-4e6b-b2d6-c77ef324a07d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 10 12:21:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 69 op/s
Jan 10 12:21:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 10 12:21:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 10 12:21:32 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 10 12:21:32 np0005580781 nova_compute[237049]: 2026-01-10 17:21:32.802 237053 DEBUG nova.storage.rbd_utils [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] creating snapshot(snap) on rbd image(54dadfaa-d0a5-471e-9ebf-65a4699f0e55) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 10 12:21:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 10 12:21:33 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 10 12:21:33 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 10 12:21:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 682 B/s wr, 20 op/s
Jan 10 12:21:35 np0005580781 nova_compute[237049]: 2026-01-10 17:21:35.240 237053 INFO nova.virt.libvirt.driver [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Snapshot image upload complete#033[00m
Jan 10 12:21:35 np0005580781 nova_compute[237049]: 2026-01-10 17:21:35.241 237053 INFO nova.compute.manager [None req-02e78c99-790d-4bd5-a244-f9c8c8d98a40 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 4.24 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 10 12:21:36 np0005580781 podman[242107]: 2026-01-10 17:21:36.016345628 +0000 UTC m=+0.084091011 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:21:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:21:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1901931967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:21:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:21:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1901931967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:21:36 np0005580781 podman[242107]: 2026-01-10 17:21:36.121897531 +0000 UTC m=+0.189642874 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:21:36 np0005580781 podman[242147]: 2026-01-10 17:21:36.368498534 +0000 UTC m=+0.147972906 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 12:21:36 np0005580781 podman[242151]: 2026-01-10 17:21:36.397726336 +0000 UTC m=+0.176310611 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 10 12:21:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 5.2 KiB/s wr, 142 op/s
Jan 10 12:21:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 10 12:21:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 10 12:21:36 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.436 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.437 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.438 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.438 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.439 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.441 237053 INFO nova.compute.manager [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Terminating instance
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.443 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "refresh_cache-114a4603-17a5-4e6b-b2d6-c77ef324a07d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.443 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquired lock "refresh_cache-114a4603-17a5-4e6b-b2d6-c77ef324a07d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.444 237053 DEBUG nova.network.neutron [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:21:37 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:21:37 np0005580781 nova_compute[237049]: 2026-01-10 17:21:37.942 237053 DEBUG nova.network.neutron [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 10 12:21:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:21:38
Jan 10 12:21:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:21:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:21:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta']
Jan 10 12:21:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:21:38 np0005580781 nova_compute[237049]: 2026-01-10 17:21:38.221 237053 DEBUG nova.network.neutron [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 10 12:21:38 np0005580781 nova_compute[237049]: 2026-01-10 17:21:38.235 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Releasing lock "refresh_cache-114a4603-17a5-4e6b-b2d6-c77ef324a07d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 10 12:21:38 np0005580781 nova_compute[237049]: 2026-01-10 17:21:38.236 237053 DEBUG nova.compute.manager [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 10 12:21:38 np0005580781 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 10 12:21:38 np0005580781 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 1.179s CPU time.
Jan 10 12:21:38 np0005580781 systemd-machined[205102]: Machine qemu-2-instance-00000002 terminated.
Jan 10 12:21:38 np0005580781 podman[242458]: 2026-01-10 17:21:38.393538017 +0000 UTC m=+0.056189208 container create 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:21:38 np0005580781 systemd[1]: Started libpod-conmon-5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a.scope.
Jan 10 12:21:38 np0005580781 podman[242458]: 2026-01-10 17:21:38.365116659 +0000 UTC m=+0.027767910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:21:38 np0005580781 nova_compute[237049]: 2026-01-10 17:21:38.464 237053 INFO nova.virt.libvirt.driver [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance destroyed successfully.
Jan 10 12:21:38 np0005580781 nova_compute[237049]: 2026-01-10 17:21:38.465 237053 DEBUG nova.objects.instance [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'resources' on Instance uuid 114a4603-17a5-4e6b-b2d6-c77ef324a07d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 10 12:21:38 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:21:38 np0005580781 podman[242458]: 2026-01-10 17:21:38.504116508 +0000 UTC m=+0.166767779 container init 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:21:38 np0005580781 podman[242458]: 2026-01-10 17:21:38.515492254 +0000 UTC m=+0.178143445 container start 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 10 12:21:38 np0005580781 podman[242458]: 2026-01-10 17:21:38.52011867 +0000 UTC m=+0.182769881 container attach 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:21:38 np0005580781 elegant_chebyshev[242475]: 167 167
Jan 10 12:21:38 np0005580781 systemd[1]: libpod-5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a.scope: Deactivated successfully.
Jan 10 12:21:38 np0005580781 podman[242458]: 2026-01-10 17:21:38.530809745 +0000 UTC m=+0.193460916 container died 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 10 12:21:38 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9ac9627a80ce9e05d4199ade9d2d8212fde8054f2f760ac26968d358cc5b8e9b-merged.mount: Deactivated successfully.
Jan 10 12:21:38 np0005580781 podman[242458]: 2026-01-10 17:21:38.576623656 +0000 UTC m=+0.239274867 container remove 5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 10 12:21:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.8 KiB/s wr, 132 op/s
Jan 10 12:21:38 np0005580781 systemd[1]: libpod-conmon-5e866af974d52a602bde1168dcd4f4ade35399db87989d76a72b51d963abd66a.scope: Deactivated successfully.
Jan 10 12:21:38 np0005580781 podman[242520]: 2026-01-10 17:21:38.740413857 +0000 UTC m=+0.043503074 container create ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:21:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:21:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:21:38 np0005580781 systemd[1]: Started libpod-conmon-ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef.scope.
Jan 10 12:21:38 np0005580781 podman[242520]: 2026-01-10 17:21:38.720404967 +0000 UTC m=+0.023494224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:21:38 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:21:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:38 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:38 np0005580781 podman[242520]: 2026-01-10 17:21:38.867781764 +0000 UTC m=+0.170871061 container init ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 12:21:38 np0005580781 podman[242520]: 2026-01-10 17:21:38.878895661 +0000 UTC m=+0.181984888 container start ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 12:21:38 np0005580781 podman[242520]: 2026-01-10 17:21:38.883640251 +0000 UTC m=+0.186729488 container attach ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 12:21:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 10 12:21:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 10 12:21:38 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.098 237053 INFO nova.virt.libvirt.driver [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deleting instance files /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d_del
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.100 237053 INFO nova.virt.libvirt.driver [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deletion of /var/lib/nova/instances/114a4603-17a5-4e6b-b2d6-c77ef324a07d_del complete
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.172 237053 DEBUG nova.virt.libvirt.host [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.173 237053 INFO nova.virt.libvirt.host [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] UEFI support detected
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.175 237053 INFO nova.compute.manager [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.176 237053 DEBUG oslo.service.loopingcall [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.177 237053 DEBUG nova.compute.manager [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.177 237053 DEBUG nova.network.neutron [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:21:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:21:39 np0005580781 loving_meitner[242537]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:21:39 np0005580781 loving_meitner[242537]: --> All data devices are unavailable
Jan 10 12:21:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 10 12:21:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 10 12:21:39 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 10 12:21:39 np0005580781 systemd[1]: libpod-ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef.scope: Deactivated successfully.
Jan 10 12:21:39 np0005580781 podman[242520]: 2026-01-10 17:21:39.417679002 +0000 UTC m=+0.720768239 container died ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 12:21:39 np0005580781 systemd[1]: var-lib-containers-storage-overlay-abe3108b175475fa85651cf73b196f1dfd100223ad33e12304d42e9f0594b4c8-merged.mount: Deactivated successfully.
Jan 10 12:21:39 np0005580781 podman[242520]: 2026-01-10 17:21:39.472001484 +0000 UTC m=+0.775090701 container remove ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 12:21:39 np0005580781 systemd[1]: libpod-conmon-ebaa469a032c68c31ecaa7e2cf881afd820b08f2e58228c3fd0fabda70e98cef.scope: Deactivated successfully.
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.492 237053 DEBUG nova.network.neutron [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.520 237053 DEBUG nova.network.neutron [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.547 237053 INFO nova.compute.manager [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Took 0.37 seconds to deallocate network for instance.
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.619 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.620 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 10 12:21:39 np0005580781 nova_compute[237049]: 2026-01-10 17:21:39.703 237053 DEBUG oslo_concurrency.processutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 10 12:21:39 np0005580781 podman[242652]: 2026-01-10 17:21:39.956037469 +0000 UTC m=+0.039863006 container create 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:21:40 np0005580781 systemd[1]: Started libpod-conmon-5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5.scope.
Jan 10 12:21:40 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:21:40 np0005580781 podman[242652]: 2026-01-10 17:21:39.936530134 +0000 UTC m=+0.020355651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:21:40 np0005580781 podman[242652]: 2026-01-10 17:21:40.048673601 +0000 UTC m=+0.132499188 container init 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 12:21:40 np0005580781 podman[242652]: 2026-01-10 17:21:40.060951233 +0000 UTC m=+0.144776760 container start 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 10 12:21:40 np0005580781 podman[242652]: 2026-01-10 17:21:40.065523478 +0000 UTC m=+0.149349015 container attach 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:21:40 np0005580781 thirsty_mclean[242668]: 167 167
Jan 10 12:21:40 np0005580781 systemd[1]: libpod-5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5.scope: Deactivated successfully.
Jan 10 12:21:40 np0005580781 podman[242652]: 2026-01-10 17:21:40.069419963 +0000 UTC m=+0.153245460 container died 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 10 12:21:40 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ca18b563b63b57e64a640a7310f9c598bba6e35d45c783dfa150dc6b1dfd0ee4-merged.mount: Deactivated successfully.
Jan 10 12:21:40 np0005580781 podman[242652]: 2026-01-10 17:21:40.111767172 +0000 UTC m=+0.195592679 container remove 5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 12:21:40 np0005580781 systemd[1]: libpod-conmon-5a1626f8900ce4986f91a4eee685a00f2bb49fef1c51730345455c56bb5828e5.scope: Deactivated successfully.
Jan 10 12:21:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:21:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2679230637' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:21:40 np0005580781 nova_compute[237049]: 2026-01-10 17:21:40.258 237053 DEBUG oslo_concurrency.processutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:40 np0005580781 nova_compute[237049]: 2026-01-10 17:21:40.267 237053 DEBUG nova.compute.provider_tree [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:21:40 np0005580781 nova_compute[237049]: 2026-01-10 17:21:40.321 237053 DEBUG nova.scheduler.client.report [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:21:40 np0005580781 podman[242694]: 2026-01-10 17:21:40.337635574 +0000 UTC m=+0.048174642 container create 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:21:40 np0005580781 nova_compute[237049]: 2026-01-10 17:21:40.376 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:40 np0005580781 systemd[1]: Started libpod-conmon-3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15.scope.
Jan 10 12:21:40 np0005580781 podman[242694]: 2026-01-10 17:21:40.313985786 +0000 UTC m=+0.024524834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:21:40 np0005580781 nova_compute[237049]: 2026-01-10 17:21:40.421 237053 INFO nova.scheduler.client.report [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Deleted allocations for instance 114a4603-17a5-4e6b-b2d6-c77ef324a07d#033[00m
Jan 10 12:21:40 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:21:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:40 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:40 np0005580781 podman[242694]: 2026-01-10 17:21:40.459665023 +0000 UTC m=+0.170204161 container init 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:21:40 np0005580781 podman[242694]: 2026-01-10 17:21:40.47246984 +0000 UTC m=+0.183008908 container start 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 12:21:40 np0005580781 podman[242694]: 2026-01-10 17:21:40.478048715 +0000 UTC m=+0.188587853 container attach 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 12:21:40 np0005580781 nova_compute[237049]: 2026-01-10 17:21:40.528 237053 DEBUG oslo_concurrency.lockutils [None req-0cc8dfa5-ebce-444d-ac7a-56cb838145c3 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "114a4603-17a5-4e6b-b2d6-c77ef324a07d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 9.8 KiB/s wr, 219 op/s
Jan 10 12:21:40 np0005580781 happy_volhard[242711]: {
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:    "0": [
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:        {
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "devices": [
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "/dev/loop3"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            ],
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_name": "ceph_lv0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_size": "21470642176",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "name": "ceph_lv0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "tags": {
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cluster_name": "ceph",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.crush_device_class": "",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.encrypted": "0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.objectstore": "bluestore",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osd_id": "0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.type": "block",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.vdo": "0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.with_tpm": "0"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            },
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "type": "block",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "vg_name": "ceph_vg0"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:        }
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:    ],
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:    "1": [
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:        {
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "devices": [
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "/dev/loop4"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            ],
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_name": "ceph_lv1",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_size": "21470642176",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "name": "ceph_lv1",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "tags": {
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cluster_name": "ceph",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.crush_device_class": "",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.encrypted": "0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.objectstore": "bluestore",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osd_id": "1",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.type": "block",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.vdo": "0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.with_tpm": "0"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            },
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "type": "block",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "vg_name": "ceph_vg1"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:        }
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:    ],
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:    "2": [
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:        {
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "devices": [
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "/dev/loop5"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            ],
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_name": "ceph_lv2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_size": "21470642176",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "name": "ceph_lv2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "tags": {
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.cluster_name": "ceph",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.crush_device_class": "",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.encrypted": "0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.objectstore": "bluestore",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osd_id": "2",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.type": "block",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.vdo": "0",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:                "ceph.with_tpm": "0"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            },
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "type": "block",
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:            "vg_name": "ceph_vg2"
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:        }
Jan 10 12:21:40 np0005580781 happy_volhard[242711]:    ]
Jan 10 12:21:40 np0005580781 happy_volhard[242711]: }
Jan 10 12:21:40 np0005580781 systemd[1]: libpod-3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15.scope: Deactivated successfully.
Jan 10 12:21:40 np0005580781 podman[242694]: 2026-01-10 17:21:40.858001921 +0000 UTC m=+0.568540989 container died 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:21:40 np0005580781 systemd[1]: var-lib-containers-storage-overlay-78f34f70b3ddc92e54872b8170f51e7afacf11a37c0d93c259498770c803a123-merged.mount: Deactivated successfully.
Jan 10 12:21:40 np0005580781 podman[242694]: 2026-01-10 17:21:40.909665094 +0000 UTC m=+0.620204122 container remove 3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:21:40 np0005580781 systemd[1]: libpod-conmon-3d4be6da4b2d7bd91143bb5212234053d30d5df328d647582be1898f61becc15.scope: Deactivated successfully.
Jan 10 12:21:40 np0005580781 nova_compute[237049]: 2026-01-10 17:21:40.998 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "6290fedf-9ecb-464c-8d5e-b6af64859702" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.000 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.000 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "6290fedf-9ecb-464c-8d5e-b6af64859702-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.001 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.001 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.004 237053 INFO nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Terminating instance#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.006 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "refresh_cache-6290fedf-9ecb-464c-8d5e-b6af64859702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.006 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquired lock "refresh_cache-6290fedf-9ecb-464c-8d5e-b6af64859702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.007 237053 DEBUG nova.network.neutron [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.414 237053 DEBUG nova.network.neutron [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 10 12:21:41 np0005580781 podman[242793]: 2026-01-10 17:21:41.416108791 +0000 UTC m=+0.056167108 container create 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:21:41 np0005580781 systemd[1]: Started libpod-conmon-0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565.scope.
Jan 10 12:21:41 np0005580781 podman[242793]: 2026-01-10 17:21:41.388250959 +0000 UTC m=+0.028309326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:21:41 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:21:41 np0005580781 podman[242793]: 2026-01-10 17:21:41.519129488 +0000 UTC m=+0.159187835 container init 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 10 12:21:41 np0005580781 podman[242793]: 2026-01-10 17:21:41.528974479 +0000 UTC m=+0.169032806 container start 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 10 12:21:41 np0005580781 podman[242793]: 2026-01-10 17:21:41.534032548 +0000 UTC m=+0.174090865 container attach 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:21:41 np0005580781 affectionate_northcutt[242809]: 167 167
Jan 10 12:21:41 np0005580781 systemd[1]: libpod-0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565.scope: Deactivated successfully.
Jan 10 12:21:41 np0005580781 podman[242793]: 2026-01-10 17:21:41.536179481 +0000 UTC m=+0.176237798 container died 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:21:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay-9604867bab8c8850292d061b4f0f68ec5afbaadc360a8c7c8e43cf572bf00334-merged.mount: Deactivated successfully.
Jan 10 12:21:41 np0005580781 podman[242793]: 2026-01-10 17:21:41.5927641 +0000 UTC m=+0.232822407 container remove 0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:21:41 np0005580781 systemd[1]: libpod-conmon-0cd9c60de812217d2fc22dc5a2bd6decfba871e0e0ce5cdd133d82d8089dc565.scope: Deactivated successfully.
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.646 237053 DEBUG nova.network.neutron [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.664 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Releasing lock "refresh_cache-6290fedf-9ecb-464c-8d5e-b6af64859702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.665 237053 DEBUG nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 10 12:21:41 np0005580781 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 10 12:21:41 np0005580781 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.658s CPU time.
Jan 10 12:21:41 np0005580781 systemd-machined[205102]: Machine qemu-1-instance-00000001 terminated.
Jan 10 12:21:41 np0005580781 podman[242833]: 2026-01-10 17:21:41.836739756 +0000 UTC m=+0.051054037 container create 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 10 12:21:41 np0005580781 systemd[1]: Started libpod-conmon-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope.
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.893 237053 INFO nova.virt.libvirt.driver [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance destroyed successfully.#033[00m
Jan 10 12:21:41 np0005580781 nova_compute[237049]: 2026-01-10 17:21:41.894 237053 DEBUG nova.objects.instance [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lazy-loading 'resources' on Instance uuid 6290fedf-9ecb-464c-8d5e-b6af64859702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 10 12:21:41 np0005580781 podman[242833]: 2026-01-10 17:21:41.813990905 +0000 UTC m=+0.028305226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:21:41 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:21:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:21:41 np0005580781 podman[242833]: 2026-01-10 17:21:41.937194608 +0000 UTC m=+0.151508899 container init 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:21:41 np0005580781 podman[242833]: 2026-01-10 17:21:41.947685778 +0000 UTC m=+0.162000059 container start 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 12:21:41 np0005580781 podman[242833]: 2026-01-10 17:21:41.951314175 +0000 UTC m=+0.165628466 container attach 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.075 237053 INFO nova.virt.libvirt.driver [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deleting instance files /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702_del#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.076 237053 INFO nova.virt.libvirt.driver [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deletion of /var/lib/nova/instances/6290fedf-9ecb-464c-8d5e-b6af64859702_del complete#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.134 237053 INFO nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.135 237053 DEBUG oslo.service.loopingcall [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.136 237053 DEBUG nova.compute.manager [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.136 237053 DEBUG nova.network.neutron [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.357 237053 DEBUG nova.network.neutron [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.386 237053 DEBUG nova.network.neutron [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.399 237053 INFO nova.compute.manager [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 0.26 seconds to deallocate network for instance.#033[00m
Jan 10 12:21:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.7 KiB/s wr, 140 op/s
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.621 237053 INFO nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.623 237053 DEBUG nova.compute.manager [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Deleting volume: 77e9b8e1-774e-41cc-88ba-d21e1643cb3e _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 10 12:21:42 np0005580781 lvm[242945]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:21:42 np0005580781 lvm[242945]: VG ceph_vg0 finished
Jan 10 12:21:42 np0005580781 lvm[242948]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:21:42 np0005580781 lvm[242948]: VG ceph_vg1 finished
Jan 10 12:21:42 np0005580781 lvm[242950]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:21:42 np0005580781 lvm[242950]: VG ceph_vg2 finished
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.789 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.790 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:42 np0005580781 funny_zhukovsky[242850]: {}
Jan 10 12:21:42 np0005580781 nova_compute[237049]: 2026-01-10 17:21:42.827 237053 DEBUG oslo_concurrency.processutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:42 np0005580781 systemd[1]: libpod-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope: Deactivated successfully.
Jan 10 12:21:42 np0005580781 podman[242833]: 2026-01-10 17:21:42.871770031 +0000 UTC m=+1.086084342 container died 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 10 12:21:42 np0005580781 systemd[1]: libpod-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope: Consumed 1.451s CPU time.
Jan 10 12:21:42 np0005580781 systemd[1]: var-lib-containers-storage-overlay-16a37537895a39139185e9b74598e135fda086578f0b8a635155ed9dbcfee84d-merged.mount: Deactivated successfully.
Jan 10 12:21:42 np0005580781 podman[242833]: 2026-01-10 17:21:42.929098542 +0000 UTC m=+1.143412853 container remove 3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 10 12:21:42 np0005580781 systemd[1]: libpod-conmon-3b887bd9a01daca623e4572c31ae9e354931cd69cd91baecfc28ac3829f3aa46.scope: Deactivated successfully.
Jan 10 12:21:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:21:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:42 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:21:42 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:21:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703407448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:21:43 np0005580781 nova_compute[237049]: 2026-01-10 17:21:43.432 237053 DEBUG oslo_concurrency.processutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:43 np0005580781 nova_compute[237049]: 2026-01-10 17:21:43.442 237053 DEBUG nova.compute.provider_tree [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:21:43 np0005580781 nova_compute[237049]: 2026-01-10 17:21:43.470 237053 DEBUG nova.scheduler.client.report [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:21:43 np0005580781 nova_compute[237049]: 2026-01-10 17:21:43.497 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:43 np0005580781 nova_compute[237049]: 2026-01-10 17:21:43.526 237053 INFO nova.scheduler.client.report [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Deleted allocations for instance 6290fedf-9ecb-464c-8d5e-b6af64859702#033[00m
Jan 10 12:21:43 np0005580781 nova_compute[237049]: 2026-01-10 17:21:43.591 237053 DEBUG oslo_concurrency.lockutils [None req-49a8e484-13d1-48ec-8c87-792f6e967cb2 75fbaed513e94e80acbf58803e0a4b03 0299cbaa071f4ac4b1435e4144bd4d79 - - default default] Lock "6290fedf-9ecb-464c-8d5e-b6af64859702" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2941004804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2941004804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 10 12:21:44 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.7939546245642764e-06 of space, bias 1.0, pg target 0.0008381863873692829 quantized to 32 (current 32)
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.592532299548735e-07 of space, bias 1.0, pg target 7.777596898646204e-05 quantized to 32 (current 32)
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006687338715334877 of space, bias 1.0, pg target 0.2006201614600463 quantized to 32 (current 32)
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0344668074946482e-06 of space, bias 4.0, pg target 0.0012413601689935778 quantized to 16 (current 16)
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:21:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.1 KiB/s wr, 146 op/s
Jan 10 12:21:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.6 KiB/s wr, 154 op/s
Jan 10 12:21:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.4 KiB/s wr, 89 op/s
Jan 10 12:21:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:21:48.923 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:21:48.925 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:21:48.926 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 10 12:21:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 10 12:21:49 np0005580781 ceph-mon[75249]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 10 12:21:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v831: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.1 KiB/s wr, 62 op/s
Jan 10 12:21:52 np0005580781 nova_compute[237049]: 2026-01-10 17:21:52.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:52 np0005580781 nova_compute[237049]: 2026-01-10 17:21:52.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:21:52 np0005580781 nova_compute[237049]: 2026-01-10 17:21:52.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:21:52 np0005580781 nova_compute[237049]: 2026-01-10 17:21:52.362 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:21:52 np0005580781 nova_compute[237049]: 2026-01-10 17:21:52.363 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.7 KiB/s wr, 50 op/s
Jan 10 12:21:53 np0005580781 nova_compute[237049]: 2026-01-10 17:21:53.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:53 np0005580781 nova_compute[237049]: 2026-01-10 17:21:53.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:53 np0005580781 nova_compute[237049]: 2026-01-10 17:21:53.463 237053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768065698.4622705, 114a4603-17a5-4e6b-b2d6-c77ef324a07d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 10 12:21:53 np0005580781 nova_compute[237049]: 2026-01-10 17:21:53.464 237053 INFO nova.compute.manager [-] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] VM Stopped (Lifecycle Event)#033[00m
Jan 10 12:21:53 np0005580781 nova_compute[237049]: 2026-01-10 17:21:53.492 237053 DEBUG nova.compute.manager [None req-74d6b09b-f480-4798-ac92-06e3b5c6750c - - - - - -] [instance: 114a4603-17a5-4e6b-b2d6-c77ef324a07d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:54 np0005580781 nova_compute[237049]: 2026-01-10 17:21:54.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:21:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.384 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.385 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.385 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.386 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.386 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:21:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2823587697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:21:55 np0005580781 nova_compute[237049]: 2026-01-10 17:21:55.994 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.250 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.252 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98824910167605GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.253 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.253 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.347 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.348 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.371 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:21:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v834: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 307 B/s wr, 13 op/s
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.891 237053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768065701.8899665, 6290fedf-9ecb-464c-8d5e-b6af64859702 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.892 237053 INFO nova.compute.manager [-] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] VM Stopped (Lifecycle Event)#033[00m
Jan 10 12:21:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:21:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724504685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.921 237053 DEBUG nova.compute.manager [None req-1ba4046f-acc8-45a7-8645-d1cd2b371baa - - - - - -] [instance: 6290fedf-9ecb-464c-8d5e-b6af64859702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.924 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.932 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.950 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.976 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:21:56 np0005580781 nova_compute[237049]: 2026-01-10 17:21:56.977 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:21:57 np0005580781 nova_compute[237049]: 2026-01-10 17:21:57.977 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:57 np0005580781 nova_compute[237049]: 2026-01-10 17:21:57.994 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:21:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:21:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v836: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:07 np0005580781 podman[243056]: 2026-01-10 17:22:07.103845039 +0000 UTC m=+0.102520535 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 10 12:22:07 np0005580781 podman[243057]: 2026-01-10 17:22:07.13950166 +0000 UTC m=+0.136745384 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 10 12:22:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:22:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:22:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:22:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:22:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:22:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:22:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v841: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:22:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372710127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:22:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:22:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372710127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:22:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:38 np0005580781 podman[243099]: 2026-01-10 17:22:38.093774589 +0000 UTC m=+0.086132424 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:22:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:22:38
Jan 10 12:22:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:22:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:22:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups']
Jan 10 12:22:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:22:38 np0005580781 podman[243100]: 2026-01-10 17:22:38.152628094 +0000 UTC m=+0.138091685 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 12:22:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:22:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:22:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:22:43 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:22:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:44 np0005580781 podman[243289]: 2026-01-10 17:22:44.455099326 +0000 UTC m=+0.062198430 container create 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 12:22:44 np0005580781 systemd[1]: Started libpod-conmon-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope.
Jan 10 12:22:44 np0005580781 podman[243289]: 2026-01-10 17:22:44.432765468 +0000 UTC m=+0.039864582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:22:44 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:22:44 np0005580781 podman[243289]: 2026-01-10 17:22:44.566897331 +0000 UTC m=+0.173996475 container init 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:22:44 np0005580781 podman[243289]: 2026-01-10 17:22:44.575138393 +0000 UTC m=+0.182237467 container start 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:22:44 np0005580781 podman[243289]: 2026-01-10 17:22:44.579687081 +0000 UTC m=+0.186786155 container attach 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 12:22:44 np0005580781 intelligent_chaum[243305]: 167 167
Jan 10 12:22:44 np0005580781 systemd[1]: libpod-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope: Deactivated successfully.
Jan 10 12:22:44 np0005580781 conmon[243305]: conmon 2da4530523d640b78dff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope/container/memory.events
Jan 10 12:22:44 np0005580781 podman[243289]: 2026-01-10 17:22:44.588611292 +0000 UTC m=+0.195710396 container died 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:22:44 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6a2267958606bc0f98a268082a811763028481617494199c899eb7de3ceb17d2-merged.mount: Deactivated successfully.
Jan 10 12:22:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:44 np0005580781 podman[243289]: 2026-01-10 17:22:44.636562141 +0000 UTC m=+0.243661245 container remove 2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:22:44 np0005580781 systemd[1]: libpod-conmon-2da4530523d640b78dff9fed569c094ae81ccbb8314544f3e36ac078a0aa07fc.scope: Deactivated successfully.
Jan 10 12:22:44 np0005580781 podman[243329]: 2026-01-10 17:22:44.869897805 +0000 UTC m=+0.056495191 container create 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 10 12:22:44 np0005580781 systemd[1]: Started libpod-conmon-8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3.scope.
Jan 10 12:22:44 np0005580781 podman[243329]: 2026-01-10 17:22:44.84485548 +0000 UTC m=+0.031452956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:22:44 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:22:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:44 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:44 np0005580781 podman[243329]: 2026-01-10 17:22:44.988188022 +0000 UTC m=+0.174785468 container init 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 10 12:22:45 np0005580781 podman[243329]: 2026-01-10 17:22:45.005430077 +0000 UTC m=+0.192027493 container start 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:22:45 np0005580781 podman[243329]: 2026-01-10 17:22:45.010496469 +0000 UTC m=+0.197093935 container attach 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:22:45 np0005580781 serene_herschel[243345]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:22:45 np0005580781 serene_herschel[243345]: --> All data devices are unavailable
Jan 10 12:22:45 np0005580781 systemd[1]: libpod-8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3.scope: Deactivated successfully.
Jan 10 12:22:45 np0005580781 podman[243329]: 2026-01-10 17:22:45.583845817 +0000 UTC m=+0.770443233 container died 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:22:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-dd5e169a1b9c6455c6e6ac6a63eb94e6eb75f58d9c1dfa8bf4221857c76c2301-merged.mount: Deactivated successfully.
Jan 10 12:22:45 np0005580781 podman[243329]: 2026-01-10 17:22:45.639880724 +0000 UTC m=+0.826478130 container remove 8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 10 12:22:45 np0005580781 systemd[1]: libpod-conmon-8b1dfa7b5fda07e80ba599cdb4b0dd5613a5babb1d324d358ce6a8fd2ae334b3.scope: Deactivated successfully.
Jan 10 12:22:46 np0005580781 podman[243439]: 2026-01-10 17:22:46.211564954 +0000 UTC m=+0.065050141 container create f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 12:22:46 np0005580781 systemd[1]: Started libpod-conmon-f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2.scope.
Jan 10 12:22:46 np0005580781 podman[243439]: 2026-01-10 17:22:46.184676677 +0000 UTC m=+0.038161924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:22:46 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:22:46 np0005580781 podman[243439]: 2026-01-10 17:22:46.303099588 +0000 UTC m=+0.156584765 container init f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:22:46 np0005580781 podman[243439]: 2026-01-10 17:22:46.313380958 +0000 UTC m=+0.166866105 container start f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:22:46 np0005580781 podman[243439]: 2026-01-10 17:22:46.31666358 +0000 UTC m=+0.170148747 container attach f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 12:22:46 np0005580781 great_edison[243455]: 167 167
Jan 10 12:22:46 np0005580781 systemd[1]: libpod-f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2.scope: Deactivated successfully.
Jan 10 12:22:46 np0005580781 podman[243439]: 2026-01-10 17:22:46.319435888 +0000 UTC m=+0.172921075 container died f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 10 12:22:46 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3ed15df06d36b5c15531209b4fa74cc2a7c01304a2691e20ab045901d5304048-merged.mount: Deactivated successfully.
Jan 10 12:22:46 np0005580781 podman[243439]: 2026-01-10 17:22:46.366814371 +0000 UTC m=+0.220299528 container remove f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_edison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:22:46 np0005580781 systemd[1]: libpod-conmon-f5b5cac624ae36a832d496737433f7f7c49919c7b710d9ac87734235d658c6c2.scope: Deactivated successfully.
Jan 10 12:22:46 np0005580781 podman[243478]: 2026-01-10 17:22:46.589829604 +0000 UTC m=+0.055760019 container create 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:22:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:46 np0005580781 systemd[1]: Started libpod-conmon-946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001.scope.
Jan 10 12:22:46 np0005580781 podman[243478]: 2026-01-10 17:22:46.566508468 +0000 UTC m=+0.032438923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:22:46 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:22:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:46 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:46 np0005580781 podman[243478]: 2026-01-10 17:22:46.702946286 +0000 UTC m=+0.168876741 container init 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:22:46 np0005580781 podman[243478]: 2026-01-10 17:22:46.714091289 +0000 UTC m=+0.180021704 container start 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 10 12:22:46 np0005580781 podman[243478]: 2026-01-10 17:22:46.718547055 +0000 UTC m=+0.184477510 container attach 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]: {
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:    "0": [
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:        {
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "devices": [
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "/dev/loop3"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            ],
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_name": "ceph_lv0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_size": "21470642176",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "name": "ceph_lv0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "tags": {
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cluster_name": "ceph",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.crush_device_class": "",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.encrypted": "0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.objectstore": "bluestore",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osd_id": "0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.type": "block",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.vdo": "0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.with_tpm": "0"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            },
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "type": "block",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "vg_name": "ceph_vg0"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:        }
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:    ],
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:    "1": [
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:        {
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "devices": [
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "/dev/loop4"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            ],
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_name": "ceph_lv1",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_size": "21470642176",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "name": "ceph_lv1",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "tags": {
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cluster_name": "ceph",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.crush_device_class": "",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.encrypted": "0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.objectstore": "bluestore",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osd_id": "1",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.type": "block",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.vdo": "0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.with_tpm": "0"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            },
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "type": "block",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "vg_name": "ceph_vg1"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:        }
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:    ],
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:    "2": [
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:        {
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "devices": [
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "/dev/loop5"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            ],
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_name": "ceph_lv2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_size": "21470642176",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "name": "ceph_lv2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "tags": {
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.cluster_name": "ceph",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.crush_device_class": "",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.encrypted": "0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.objectstore": "bluestore",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osd_id": "2",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.type": "block",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.vdo": "0",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:                "ceph.with_tpm": "0"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            },
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "type": "block",
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:            "vg_name": "ceph_vg2"
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:        }
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]:    ]
Jan 10 12:22:47 np0005580781 suspicious_lalande[243495]: }
Jan 10 12:22:47 np0005580781 systemd[1]: libpod-946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001.scope: Deactivated successfully.
Jan 10 12:22:47 np0005580781 podman[243478]: 2026-01-10 17:22:47.096080245 +0000 UTC m=+0.562010660 container died 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:22:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay-8f00adda25fe8438164ec8c244024a4bdf4b4bf2b4c903e7267fc52eabb36c52-merged.mount: Deactivated successfully.
Jan 10 12:22:47 np0005580781 podman[243478]: 2026-01-10 17:22:47.158820659 +0000 UTC m=+0.624751074 container remove 946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lalande, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 10 12:22:47 np0005580781 systemd[1]: libpod-conmon-946b66faf7562a67431779367f7dbb28df3fbf8cdaef455ba77efe4da0697001.scope: Deactivated successfully.
Jan 10 12:22:47 np0005580781 podman[243578]: 2026-01-10 17:22:47.70205142 +0000 UTC m=+0.062492589 container create 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 10 12:22:47 np0005580781 systemd[1]: Started libpod-conmon-6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92.scope.
Jan 10 12:22:47 np0005580781 podman[243578]: 2026-01-10 17:22:47.673541128 +0000 UTC m=+0.033982337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:22:47 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:22:47 np0005580781 podman[243578]: 2026-01-10 17:22:47.787276897 +0000 UTC m=+0.147718086 container init 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:22:47 np0005580781 podman[243578]: 2026-01-10 17:22:47.79520182 +0000 UTC m=+0.155642969 container start 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 12:22:47 np0005580781 podman[243578]: 2026-01-10 17:22:47.799296655 +0000 UTC m=+0.159737864 container attach 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:22:47 np0005580781 mystifying_boyd[243594]: 167 167
Jan 10 12:22:47 np0005580781 systemd[1]: libpod-6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92.scope: Deactivated successfully.
Jan 10 12:22:47 np0005580781 podman[243578]: 2026-01-10 17:22:47.804004818 +0000 UTC m=+0.164445937 container died 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 10 12:22:47 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3a3c8007052594733bd75da49ccfe8a0bfde96cd181635c9e4f3f80d03bb5c8e-merged.mount: Deactivated successfully.
Jan 10 12:22:47 np0005580781 podman[243578]: 2026-01-10 17:22:47.856131904 +0000 UTC m=+0.216573063 container remove 6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 10 12:22:47 np0005580781 systemd[1]: libpod-conmon-6160c383596c7a292694a16a2e051fd59c789916cb56518a1a3c93bcd4bd7f92.scope: Deactivated successfully.
Jan 10 12:22:48 np0005580781 podman[243616]: 2026-01-10 17:22:48.084149618 +0000 UTC m=+0.072945883 container create b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:22:48 np0005580781 systemd[1]: Started libpod-conmon-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope.
Jan 10 12:22:48 np0005580781 podman[243616]: 2026-01-10 17:22:48.053617659 +0000 UTC m=+0.042414004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:22:48 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:22:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:48 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:22:48 np0005580781 podman[243616]: 2026-01-10 17:22:48.184995725 +0000 UTC m=+0.173792000 container init b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 10 12:22:48 np0005580781 podman[243616]: 2026-01-10 17:22:48.192129026 +0000 UTC m=+0.180925291 container start b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:22:48 np0005580781 podman[243616]: 2026-01-10 17:22:48.196628262 +0000 UTC m=+0.185424537 container attach b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:22:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:22:48.936 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:22:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:22:48.940 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:22:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:22:48.940 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:22:48 np0005580781 lvm[243710]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:22:48 np0005580781 lvm[243710]: VG ceph_vg0 finished
Jan 10 12:22:48 np0005580781 lvm[243711]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:22:48 np0005580781 lvm[243711]: VG ceph_vg1 finished
Jan 10 12:22:48 np0005580781 lvm[243713]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:22:48 np0005580781 lvm[243713]: VG ceph_vg2 finished
Jan 10 12:22:49 np0005580781 lvm[243715]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:22:49 np0005580781 lvm[243715]: VG ceph_vg2 finished
Jan 10 12:22:49 np0005580781 awesome_herschel[243632]: {}
Jan 10 12:22:49 np0005580781 systemd[1]: libpod-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope: Deactivated successfully.
Jan 10 12:22:49 np0005580781 systemd[1]: libpod-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope: Consumed 1.397s CPU time.
Jan 10 12:22:49 np0005580781 podman[243616]: 2026-01-10 17:22:49.08777917 +0000 UTC m=+1.076575415 container died b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:22:49 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6e58901899483886f27121af1cc8ef65799199c9c2a6adde1f675f770326c661-merged.mount: Deactivated successfully.
Jan 10 12:22:49 np0005580781 podman[243616]: 2026-01-10 17:22:49.141364807 +0000 UTC m=+1.130161042 container remove b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_herschel, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 10 12:22:49 np0005580781 systemd[1]: libpod-conmon-b70ff93e5d8c689efa1a1c931dece4158218d1a858a6c12dc65a83a09c5fa08c.scope: Deactivated successfully.
Jan 10 12:22:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:22:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:22:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:22:49 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:22:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:22:50 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:22:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:52 np0005580781 nova_compute[237049]: 2026-01-10 17:22:52.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:52 np0005580781 nova_compute[237049]: 2026-01-10 17:22:52.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:22:52 np0005580781 nova_compute[237049]: 2026-01-10 17:22:52.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:22:52 np0005580781 nova_compute[237049]: 2026-01-10 17:22:52.378 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:22:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:53 np0005580781 nova_compute[237049]: 2026-01-10 17:22:53.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:53 np0005580781 nova_compute[237049]: 2026-01-10 17:22:53.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:54 np0005580781 systemd-logind[798]: New session 52 of user zuul.
Jan 10 12:22:54 np0005580781 systemd[1]: Started Session 52 of User zuul.
Jan 10 12:22:55 np0005580781 nova_compute[237049]: 2026-01-10 17:22:55.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:55 np0005580781 nova_compute[237049]: 2026-01-10 17:22:55.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:55 np0005580781 nova_compute[237049]: 2026-01-10 17:22:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:55 np0005580781 nova_compute[237049]: 2026-01-10 17:22:55.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:22:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.388 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.390 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.390 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.391 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.392 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:22:57 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:22:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:22:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558181602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:22:57 np0005580781 nova_compute[237049]: 2026-01-10 17:22:57.980 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.149 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.150 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.150 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.151 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.254 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.254 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.294 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:22:58 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14698 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:22:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:22:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:22:58 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322899128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.860 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.868 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.890 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.892 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:22:58 np0005580781 nova_compute[237049]: 2026-01-10 17:22:58.893 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:22:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 10 12:22:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587073077' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 12:22:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:22:59 np0005580781 nova_compute[237049]: 2026-01-10 17:22:59.892 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:23:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:23:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:04 np0005580781 ovs-vsctl[244103]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 10 12:23:06 np0005580781 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 10 12:23:06 np0005580781 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 10 12:23:06 np0005580781 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 10 12:23:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:06 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: cache status {prefix=cache status} (starting...)
Jan 10 12:23:06 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: client ls {prefix=client ls} (starting...)
Jan 10 12:23:07 np0005580781 lvm[244448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:23:07 np0005580781 lvm[244448]: VG ceph_vg0 finished
Jan 10 12:23:07 np0005580781 lvm[244455]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:23:07 np0005580781 lvm[244455]: VG ceph_vg2 finished
Jan 10 12:23:07 np0005580781 lvm[244459]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:23:07 np0005580781 lvm[244459]: VG ceph_vg1 finished
Jan 10 12:23:07 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14704 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:07 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: damage ls {prefix=damage ls} (starting...)
Jan 10 12:23:07 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump loads {prefix=dump loads} (starting...)
Jan 10 12:23:07 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14706 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:07 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 10 12:23:08 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 10 12:23:08 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 10 12:23:08 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14708 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:08 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 10 12:23:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 10 12:23:08 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534400192' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 10 12:23:08 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 10 12:23:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:08 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 10 12:23:08 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14712 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:08 np0005580781 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:23:08 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:23:08.844+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:23:08 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:23:08 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192949448' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:23:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:23:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:23:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:23:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:23:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:23:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:23:09 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: ops {prefix=ops} (starting...)
Jan 10 12:23:09 np0005580781 podman[244671]: 2026-01-10 17:23:09.079653232 +0000 UTC m=+0.074257530 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 10 12:23:09 np0005580781 podman[244682]: 2026-01-10 17:23:09.119635197 +0000 UTC m=+0.108969217 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691955246' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165695532' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.707544) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789707730, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1918, "num_deletes": 256, "total_data_size": 2019380, "memory_usage": 2057312, "flush_reason": "Manual Compaction"}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 10 12:23:09 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: session ls {prefix=session ls} (starting...)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789735290, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1373494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15812, "largest_seqno": 17729, "table_properties": {"data_size": 1366255, "index_size": 4057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17345, "raw_average_key_size": 20, "raw_value_size": 1350731, "raw_average_value_size": 1631, "num_data_blocks": 182, "num_entries": 828, "num_filter_entries": 828, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768065636, "oldest_key_time": 1768065636, "file_creation_time": 1768065789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 27791 microseconds, and 9819 cpu microseconds.
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.735353) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1373494 bytes OK
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.735381) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738004) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738025) EVENT_LOG_v1 {"time_micros": 1768065789738020, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738047) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2011086, prev total WAL file size 2011086, number of live WAL files 2.
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1341KB)], [38(5816KB)]
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789739125, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 7329086, "oldest_snapshot_seqno": -1}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 3959 keys, 5814869 bytes, temperature: kUnknown
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789789368, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 5814869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5786784, "index_size": 17095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 93119, "raw_average_key_size": 23, "raw_value_size": 5714042, "raw_average_value_size": 1443, "num_data_blocks": 735, "num_entries": 3959, "num_filter_entries": 3959, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768065789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.789622) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 5814869 bytes
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.791114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.6 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.7 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(9.6) write-amplify(4.2) OK, records in: 4411, records dropped: 452 output_compression: NoCompression
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.791140) EVENT_LOG_v1 {"time_micros": 1768065789791123, "job": 18, "event": "compaction_finished", "compaction_time_micros": 50333, "compaction_time_cpu_micros": 22117, "output_level": 6, "num_output_files": 1, "total_output_size": 5814869, "num_input_records": 4411, "num_output_records": 3959, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789791555, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768065789792802, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.738815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:23:09.792928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:23:09 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: status {prefix=status} (starting...)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964299435' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 10 12:23:09 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633338918' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 10 12:23:10 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14726 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:10 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 12:23:10 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561152790' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 12:23:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:10 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14728 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 12:23:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43030912' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 12:23:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 10 12:23:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813902671' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 10 12:23:11 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 12:23:11 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220948933' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 12:23:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 10 12:23:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164187188' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 10 12:23:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 10 12:23:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700083542' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 12:23:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:12 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:12 np0005580781 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 12:23:12 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:23:12.681+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 12:23:12 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 12:23:12 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576411142' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 12:23:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 10 12:23:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1168958456' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 10 12:23:13 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14746 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001029 2 0.000015
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001028 2 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000041 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001112 2 0.000032
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001170 2 0.000011
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001207 2 0.000016
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001218 2 0.000013
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001240 2 0.000012
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000055 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000001 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001316 2 0.000015
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001347 2 0.000021
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000562 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001381 2 0.000016
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001416 2 0.000017
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001437 2 0.000022
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001465 2 0.000017
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001522 2 0.000014
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001567 2 0.000015
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001582 2 0.000037
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000049 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001696 2 0.000021
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.001770 2 0.000013
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57753600 unmapped: 2007040 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 40 handle_osd_map epochs [40,41], i have 40, src has [1,41]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010594 4 0.000067
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010683 4 0.000097
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010680 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010791 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009026 4 0.000278
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009186 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009239 4 0.000087
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009776 4 0.000111
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009838 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009316 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009457 4 0.000140
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009572 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011581 4 0.000110
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011640 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009661 4 0.000180
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009811 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009595 4 0.000237
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009974 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011530 4 0.000080
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011579 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011058 4 0.000161
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010723 4 0.000099
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011289 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010773 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010667 4 0.000072
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010721 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010703 4 0.000049
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010744 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011468 4 0.000232
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011621 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010609 4 0.000399
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010903 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010640 4 0.000040
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 75.884468079s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013617 3 0.000174
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010594 4 0.000636
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011210 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 75.884468079s@ mbc={}] exit Started/Primary/Peering 1.013764 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=12.634835243s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 75.884468079s@ mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011321 4 0.000091
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011370 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011250 4 0.000118
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011334 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011445 4 0.000201
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011578 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010844 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011549 4 0.000073
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011629 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010090 4 0.000062
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010130 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010867 4 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010915 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010748 4 0.000156
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010864 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011130 4 0.000039
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011158 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012697 4 0.000167
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012828 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011150 4 0.000045
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011249 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010944 4 0.000072
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010997 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010865 4 0.000073
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010915 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012694 4 0.000066
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012745 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004155 3 0.000355
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008803 3 0.000142
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009004 3 0.000411
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008780 3 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008963 3 0.000092
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008894 3 0.000115
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009030 3 0.000236
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008887 3 0.000363
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008809 3 0.000052
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008917 3 0.000078
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009024 3 0.000209
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009179 3 0.000043
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009198 3 0.000200
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009099 3 0.000052
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009051 3 0.000092
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008979 3 0.000061
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008973 3 0.000049
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008901 3 0.000063
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009013 3 0.000093
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008853 3 0.000075
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008871 3 0.000051
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000050 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008972 3 0.000139
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009009 3 0.000071
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009039 3 0.000060
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009039 3 0.000117
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009334 3 0.000113
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009005 3 0.000060
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009004 3 0.000077
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009253 3 0.000729
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008987 3 0.000049
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009796 3 0.000923
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008855 3 0.000038
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/22 les/c/f=41/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57901056 unmapped: 2908160 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 325799 data_alloc: 218103808 data_used: 0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57917440 unmapped: 2891776 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57925632 unmapped: 2883584 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 41 handle_osd_map epochs [42,42], i have 41, src has [1,42]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.575145721s of 10.665144920s, submitted: 238
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 42 heartbeat osd_stat(store_statfs(0x4fe164000/0x0/0x4ffc00000, data 0x291bd/0x68000, compress 0x0/0x0/0x0, omap 0x4878, meta 0x1a2b788), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 42 handle_osd_map epochs [43,43], i have 42, src has [1,43]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 57982976 unmapped: 2826240 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 58130432 unmapped: 2678784 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 58130432 unmapped: 2678784 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 336425 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 58146816 unmapped: 2662400 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 43 handle_osd_map epochs [43,44], i have 43, src has [1,44]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.973673 7 0.000115
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.982894 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.993606 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.993657 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.974040 7 0.000128
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.982990 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.992219 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.992350 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.053740 21 0.000199
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.060567 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.053251 21 0.000146
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.060701 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.060479 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.060522 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.060549 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.946036339s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196174622s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.060804 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025462151s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275505066s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945967674s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196235657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025725365s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275756836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] exit Reset 0.000337 1 0.000466
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945830345s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196235657s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] exit Reset 0.000780 1 0.000984
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] exit Reset 0.000728 1 0.000890
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] exit Start 0.000024 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.024790764s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275505066s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] exit Start 0.000211 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.025278091s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275756836s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.055061 21 0.000237
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.061608 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.061722 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.061816 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944757462s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195938110s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] exit Reset 0.000551 1 0.001240
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] exit Reset 0.000096 1 0.000174
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.055358 21 0.000886
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] exit Start 0.000042 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944697380s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195938110s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.055371 21 0.000109
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.061839 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.062584 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.061939 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.062646 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.062511 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.062604 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944560051s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196243286s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] exit Reset 0.000154 1 0.000241
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.944519043s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196243286s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943988800s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195838928s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.975965 7 0.000104
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.984812 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.994691 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] exit Reset 0.000125 1 0.000673
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.994732 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023788452s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.275848389s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] exit Start 0.000145 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] exit Reset 0.000147 1 0.000249
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] exit Start 0.000046 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023736954s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.275848389s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.943914413s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195838928s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.056667 21 0.000080
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.976379 7 0.000060
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.985402 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.994990 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.995049 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] exit Start 0.000238 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.945515633s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196174622s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023344040s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276062012s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] exit Reset 0.000139 1 0.000224
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.063647 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.063952 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] exit Start 0.000023 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.023289680s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276062012s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.064086 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942822456s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195831299s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.976894 7 0.000129
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.985842 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.997499 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.997528 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022882462s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276046753s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] exit Reset 0.000065 1 0.000118
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022849083s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276046753s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] exit Reset 0.000347 1 0.000646
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.976969 7 0.000045
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] exit Start 0.000071 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.986092 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.995923 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.995966 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.942521095s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195831299s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022719383s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276260376s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.057922 21 0.000108
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.064806 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.064861 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] exit Reset 0.000105 1 0.000197
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022646904s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276260376s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.065001 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.977027 7 0.000245
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.986294 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.997609 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941881180s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195747375s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] exit Reset 0.000174 1 0.000330
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.997724 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941844940s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195747375s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022457123s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276466370s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] exit Reset 0.000138 1 0.000310
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] exit Start 0.000019 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.022409439s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276466370s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.058130 21 0.000105
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.065189 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.065574 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.065639 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.977923 7 0.000143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941703796s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.196022034s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.986918 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.998548 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.998606 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] exit Reset 0.000133 1 0.000313
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021822929s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276275635s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] exit Reset 0.000100 1 0.000238
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.941620827s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.196022034s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] exit Start 0.000024 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021756172s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276275635s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.977909 7 0.000065
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.987123 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.997898 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.997932 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.059719 21 0.000083
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.066785 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021719933s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276496887s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.066844 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.067025 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] exit Reset 0.000343 1 0.000424
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021435738s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276496887s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.978341 7 0.000062
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.987443 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.998403 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939940453s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194961548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.998578 0 0.000000
Jan 10 12:23:13 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] exit Reset 0.000340 1 0.000617
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021199226s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276519775s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] exit Reset 0.000091 1 0.000243
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.021158218s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276519775s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.059897 21 0.000081
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.066835 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.066978 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.067445 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939948082s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195663452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] exit Start 0.000125 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939641953s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194961548s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] exit Reset 0.000291 1 0.000388
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.939791679s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195663452s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.979177 7 0.000045
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.060807 21 0.000469
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.068370 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.068426 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.988218 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.999618 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.978958 7 0.000370
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.988196 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.999547 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.999577 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.999685 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020483971s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276596069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] exit Reset 0.000039 1 0.000058
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020464897s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276596069s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.068460 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938565254s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194801331s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] exit Reset 0.000079 1 0.000284
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] exit Start 0.000035 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.938516617s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194801331s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.061614 21 0.000108
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.068831 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.068993 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.069192 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937766075s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194831848s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.060440 21 0.000369
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.069561 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.069695 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.020411491s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276550293s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] exit Reset 0.001305 1 0.001424
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.019139290s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276550293s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.069739 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] exit Reset 0.000800 1 0.000991
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937311172s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194831848s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937079430s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194824219s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] exit Reset 0.000110 1 0.002129
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.937025070s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194824219s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.063220 21 0.000168
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070373 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.070464 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.070493 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936864853s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194839478s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.981005 7 0.000161
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990012 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.001656 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.063546 21 0.000074
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070611 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.070805 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.070849 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.001799 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] exit Reset 0.000066 1 0.000137
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936842918s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194839478s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1329769' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.980890 7 0.000070
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990684 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.002294 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018429756s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.981281 7 0.000066
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Reset 0.000514 1 0.000701
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017952919s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990662 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.000895 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.000914 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.063831 21 0.000073
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070914 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.070963 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018333435s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277122498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.071655 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] exit Reset 0.000037 1 0.000151
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.018314362s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277122498s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936121941s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194946289s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] exit Reset 0.000042 1 0.000081
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.936101913s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194946289s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.002343 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.981927 7 0.000057
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.990945 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.001873 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.065055 21 0.000086
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.001903 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.071680 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.071926 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.071972 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.058743 21 0.000101
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.070621 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017609596s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276664734s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] exit Reset 0.000064 1 0.000122
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017589569s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276664734s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934791565s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Reset 0.000230 1 0.000257
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.064819 21 0.000292
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.072218 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934760094s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.072282 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.072312 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934700012s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194023132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Reset 0.000043 1 0.000081
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934677124s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194023132s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017175674s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.276603699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Reset 0.000575 1 0.001736
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.065705 21 0.000158
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.072747 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.073102 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.073220 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] exit Start 0.000109 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934269905s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193969727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.071390 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] exit Reset 0.000100 1 0.000122
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.934208870s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193969727s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.982477 7 0.000070
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.991565 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.002841 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.002858 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017211914s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277099609s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.017109871s) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.276603699s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.069621 21 0.000198
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.073403 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.073530 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.073556 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.065992 21 0.000181
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930295944s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.190406799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] exit Reset 0.000078 1 0.000160
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.930240631s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.190406799s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.073559 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.073859 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.982871 7 0.000066
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.991946 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.002959 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.073993 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.003007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016772270s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277183533s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] exit Reset 0.000074 1 0.000160
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016730309s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277183533s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933568954s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194084167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] exit Reset 0.000204 1 0.000480
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.983269 7 0.000066
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.992312 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.003257 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] exit Start 0.000082 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933501244s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194084167s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.066844 21 0.000206
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.073601 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.074550 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.003318 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] exit Reset 0.001036 1 0.000197
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016199112s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277099609s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016240120s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277198792s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] exit Reset 0.000070 1 0.000300
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.074736 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] exit Start 0.000097 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.016202927s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277198792s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932877541s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.193984985s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] exit Reset 0.000164 1 0.000365
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 6.983774 7 0.000055
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 6.992861 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.005622 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.005661 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015887260s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 active pruub 80.277290344s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] exit Reset 0.000046 1 0.000228
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=9.015865326s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY pruub 80.277290344s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] exit Start 0.000074 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932753563s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.193984985s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.071466 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933269501s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.195274353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] exit Reset 0.000101 1 0.007726
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932654381s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 active pruub 82.194847107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] exit Reset 0.004133 1 0.004147
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.932610512s) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.194847107s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000031
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44 pruub=10.933234215s) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY pruub 82.195274353s@ mbc={}] enter Started/Stray
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000243 1 0.000047
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000493 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000016
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000136 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000202 1 0.000035
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001624 1 0.000443
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000225 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000031
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000615 1 0.000147
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000948
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000240 1 0.000216
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000269 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000027
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000159 1 0.000041
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000165 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000067 1 0.000095
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000086 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000137 1 0.000235
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000115 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000033
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000133 1 0.000047
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000025
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000123 1 0.000039
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000036
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000116 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000059
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000081 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000226
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000087 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000027
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000047
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000199 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000016
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000120 1 0.000033
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000091 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000249 1 0.000034
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000171 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000033
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000060
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000067 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000241
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000175 1 0.000034
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000032
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.025449 2 0.000044
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.025367 2 0.000705
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=0 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000029
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000267 1 0.000077
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011889 2 0.000164
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012354 2 0.000027
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013651 2 0.000289
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008549 2 0.000217
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007857 2 0.000095
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.029087 2 0.000237
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007148 2 0.000035
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007442 2 0.000045
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006930 2 0.000029
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000141 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006781 2 0.000072
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006438 2 0.000041
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003838 2 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000180 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000024
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000415 1 0.000052
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000035
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000158 1 0.000114
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007023 2 0.000085
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009133 2 0.000026
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008001 2 0.002376
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005219 2 0.000025
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004372 2 0.000061
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008707 2 0.000062
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000069 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000018
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000104 1 0.000040
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000021
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000187 1 0.000138
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000017
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000028
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000028
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000145 1 0.000038
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000033
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000176 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000035
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001008 1 0.000639
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000175 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000036
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000076
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008695 2 0.000116
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006956 2 0.000075
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006367 2 0.000050
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005876 2 0.000039
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005468 2 0.000042
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004559 2 0.000035
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005093 2 0.000049
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002853 2 0.000104
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002377 2 0.000060
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 868352 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 44 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114002 2 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114601 2 0.000228
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.123926 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114244 2 0.000084
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.119599 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114656 2 0.000028
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.120758 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.118133 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114641 2 0.000058
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.120273 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114638 2 0.000031
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114905 2 0.000039
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.119306 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.122074 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114952 2 0.000056
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.121476 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.114429 2 0.000035
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.117033 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125372 2 0.000194
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.154879 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127425 2 0.000045
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127619 2 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.153342 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.126785 2 0.000055
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.141191 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.155005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127108 2 0.000023
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127085 2 0.000034
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.139374 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.139641 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127189 2 0.000026
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.135291 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127243 2 0.000022
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.136255 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127339 2 0.000145
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134656 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127193 2 0.000140
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.135108 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127180 2 0.000068
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133902 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127475 2 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134462 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.128018 2 0.000043
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.135059 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124656 2 0.000031
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133952 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124769 2 0.000029
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132972 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125000 2 0.000048
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129703 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125328 2 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132577 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.124669 2 0.000065
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134342 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125522 2 0.000045
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130830 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.128557 2 0.000032
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132610 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004617 4 0.000281
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004712 4 0.000175
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004735 4 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004643 4 0.000036
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004599 4 0.000060
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004610 4 0.000058
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004546 4 0.000071
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004581 4 0.000034
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009560 4 0.001011
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008240 4 0.000103
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008074 4 0.000656
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007925 4 0.000169
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008526 4 0.000181
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000044 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009459 4 0.000239
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008065 4 0.000317
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007777 4 0.000243
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007681 4 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008045 4 0.000510
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007411 4 0.000193
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007149 4 0.000084
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006970 4 0.000097
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000029 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006887 4 0.000085
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007985 4 0.000435
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177982 7 0.000118
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177145 7 0.000201
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177578 7 0.000085
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179996 7 0.000068
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173939 7 0.000086
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.172051 7 0.000525
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.172916 7 0.000089
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173103 7 0.000063
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.172031 7 0.000061
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.169954 7 0.000700
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.169114 7 0.000807
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174467 7 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000251 1 0.000048
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177843 7 0.000152
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.171234 7 0.000245
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.171819 7 0.000052
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176589 7 0.000097
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175796 7 0.001971
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.180718 7 0.000269
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181530 7 0.000194
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181350 7 0.000090
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000968 1 0.000048
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011871 4 0.000055
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012009 4 0.000197
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001125 1 0.000112
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011588 4 0.000511
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012914 4 0.000517
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011625 4 0.000619
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001182 1 0.000020
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=44/40 les/c/f=45/43/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001300 1 0.000019
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001382 1 0.000023
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012172 4 0.001145
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001492 1 0.000062
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/18 lis/c=44/37 les/c/f=45/38/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001652 1 0.000079
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001647 1 0.000070
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001693 1 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001752 1 0.000026
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001796 1 0.000139
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001851 1 0.000133
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001829 1 0.000051
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001855 1 0.000019
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001917 1 0.000112
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001941 1 0.000039
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002117 1 0.000170
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.182100 7 0.002075
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.182629 7 0.000442
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181586 7 0.000096
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181368 7 0.000372
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.181991 7 0.000161
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.180292 7 0.000146
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175600 7 0.000115
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179851 7 0.000101
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179175 7 0.000538
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002847 1 0.000067
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.178644 7 0.000140
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176859 7 0.000295
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.177490 7 0.000141
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.179265 7 0.000054
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.182918 7 0.000188
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176404 7 0.000089
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002561 1 0.000830
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173060 7 0.000075
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176084 7 0.000047
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.184455 7 0.000552
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000633 1 0.000039
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174539 7 0.001357
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176340 7 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175023 7 0.000049
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174149 7 0.000097
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174474 7 0.000155
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000819 1 0.000044
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000966 1 0.000034
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001175 1 0.000046
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001237 1 0.000023
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001341 1 0.000018
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001436 1 0.000145
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001438 1 0.000028
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001530 1 0.000076
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001713 1 0.000023
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001760 1 0.000047
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001757 1 0.000062
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001754 1 0.000073
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001760 1 0.000096
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001710 1 0.000024
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001791 1 0.000078
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001842 1 0.000046
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001789 1 0.000129
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001828 1 0.000093
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001823 1 0.000042
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001970 1 0.000226
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001894 1 0.000120
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001041 1 0.001719
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.010814 1 0.000047
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.011124 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.14( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.189181 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.016018 1 0.000030
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.017041 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.15( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.194700 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.023133 1 0.000068
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.024312 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.201556 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030795 1 0.000113
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.032081 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.212116 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037749 1 0.000078
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039108 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.213085 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045062 1 0.000073
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046509 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.219476 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052527 1 0.000155
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.054143 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.3( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.226420 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.059856 1 0.000036
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061582 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.2( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.234744 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067515 1 0.000051
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.069228 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.239845 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074574 1 0.000064
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.076360 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.245540 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082061 1 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.083860 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.5( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.258363 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.089295 1 0.000063
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.091265 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.263374 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096624 1 0.000047
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098529 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276605 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103968 1 0.000028
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105893 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.7( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.282584 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111432 1 0.000023
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.113338 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.294127 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 630784 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118422 1 0.000021
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.120440 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.291838 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126044 1 0.000062
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.128039 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.309631 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.132905 1 0.000578
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.135065 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [0] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.306926 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139443 1 0.000086
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.142336 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1e( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.323774 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146741 1 0.000056
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.149611 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.4( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [0] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.325915 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153911 1 0.000028
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.154573 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.338121 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.161082 1 0.000030
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.161941 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.344774 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168168 1 0.000229
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.169364 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.350899 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175822 1 0.000038
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.177059 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.12( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.359187 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182836 1 0.000076
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.184135 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.16( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.364562 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190208 1 0.000034
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.191586 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.367214 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197582 1 0.000060
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.199143 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.13( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.380780 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204732 1 0.000120
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.206235 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.385876 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.211951 1 0.000033
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.213700 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.390599 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.219742 1 0.000206
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.221354 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.400078 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226734 1 0.000026
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.228541 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.406081 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.233847 1 0.000020
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.235643 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.414942 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241535 1 0.000025
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.243342 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.11( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.426393 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248931 1 0.000023
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.250751 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.427222 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256149 1 0.000020
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.257917 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.434038 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.263616 1 0.000072
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.265491 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=-1 lpr=44 pi=[35,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.438594 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270587 1 0.000053
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.272482 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1d( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.457276 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277937 1 0.000030
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.279773 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.f( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.456242 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.285694 1 0.000023
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.287563 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.1a( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.462693 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.292778 1 0.000022
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.294667 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.19( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.469356 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.300082 1 0.000063
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.302156 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.c( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.476769 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.307604 1 0.000036
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.310377 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.9( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.490278 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.314784 1 0.000049
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.316793 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 45 pg[5.18( empty lb MIN local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.490987 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 45 heartbeat osd_stat(store_statfs(0x4fe159000/0x0/0x4ffc00000, data 0x2d791/0x71000, compress 0x0/0x0/0x0, omap 0x5019, meta 0x1a2afe7), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60104704 unmapped: 704512 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 671744 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 45 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2ec21/0x74000, compress 0x0/0x0/0x0, omap 0x52a4, meta 0x1a2ad5c), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 311352 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 671744 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 671744 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.037272453s of 10.459216118s, submitted: 327
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 655360 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 45 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2ec21/0x74000, compress 0x0/0x0/0x0, omap 0x52a4, meta 0x1a2ad5c), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 45 handle_osd_map epochs [46,46], i have 45, src has [1,46]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 45 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60162048 unmapped: 647168 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe153000/0x0/0x4ffc00000, data 0x30237/0x77000, compress 0x0/0x0/0x0, omap 0x552f, meta 0x1a2aad1), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 46 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 638976 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 321722 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 47 heartbeat osd_stat(store_statfs(0x4fe14e000/0x0/0x4ffc00000, data 0x316b7/0x7a000, compress 0x0/0x0/0x0, omap 0x57ba, meta 0x1a2a846), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 47 handle_osd_map epochs [48,48], i have 48, src has [1,48]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60227584 unmapped: 581632 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60243968 unmapped: 565248 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60243968 unmapped: 565248 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60268544 unmapped: 540672 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe148000/0x0/0x4ffc00000, data 0x3414d/0x80000, compress 0x0/0x0/0x0, omap 0x5cd0, meta 0x1a2a330), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 49 handle_osd_map epochs [50,51], i have 49, src has [1,51]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 49 handle_osd_map epochs [50,51], i have 51, src has [1,51]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 557056 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 340017 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 557056 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60284928 unmapped: 524288 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60284928 unmapped: 524288 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.106193542s of 11.144284248s, submitted: 15
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 52 heartbeat osd_stat(store_statfs(0x4fe144000/0x0/0x4ffc00000, data 0x36be3/0x86000, compress 0x0/0x0/0x0, omap 0x5f5b, meta 0x1a2a0a5), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60342272 unmapped: 1515520 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 1466368 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 347542 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 1433600 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 1433600 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 1458176 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 1458176 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 54 heartbeat osd_stat(store_statfs(0x4fe13d000/0x0/0x4ffc00000, data 0x3ac8f/0x8f000, compress 0x0/0x0/0x0, omap 0x66fc, meta 0x1a29904), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60407808 unmapped: 1449984 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 354420 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60407808 unmapped: 1449984 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60407808 unmapped: 1449984 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 54 handle_osd_map epochs [55,56], i have 54, src has [1,56]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=0 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000150 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=0 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000048
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000289 1 0.000065
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001192 2 0.000136
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 56 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 1417216 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.811351776s of 10.216451645s, submitted: 16
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 1392640 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 56 handle_osd_map epochs [56,57], i have 57, src has [1,57]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007088 2 0.000096
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008713 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002444 4 0.000176
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 57 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/39 les/c/f=57/42/0 sis=56) [2] r=0 lpr=56 pi=[39,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 1376256 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe132000/0x0/0x4ffc00000, data 0x3ed3b/0x98000, compress 0x0/0x0/0x0, omap 0x6c12, meta 0x1a293ee), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372924 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 1368064 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x40351/0x9b000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 58 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 262144 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x40351/0x9b000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 229376 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 155648 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386526 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x44267/0xa4000, compress 0x0/0x0/0x0, omap 0x763e, meta 0x1a289c2), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.029172897s of 11.149907112s, submitted: 15
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402333 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe115000/0x0/0x4ffc00000, data 0x4af47/0xb3000, compress 0x0/0x0/0x0, omap 0x82f5, meta 0x1a27d0b), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 180224 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407980 data_alloc: 218103808 data_used: 858
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fe110000/0x0/0x4ffc00000, data 0x4c55d/0xb6000, compress 0x0/0x0/0x0, omap 0x8580, meta 0x1a27a80), peers [0,1] op hist [])
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f(unlocked)] enter Initial
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001224 0 0.000000
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000076 1 0.000192
Jan 10 12:23:13 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000776 0 0.000000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000444 1 0.001106
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000831 2 0.000183
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 98304 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012900 2 0.000203
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.014360 0 0.000000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.003906 3 0.000352
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000141 1 0.000221
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.137376 3 0.000191
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416166 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 57344 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.526871681s of 15.590026855s, submitted: 16
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418577 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 8192 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 1040384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424802 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427213 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 983040 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 974848 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.689327240s of 15.030009270s, submitted: 10
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429626 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 958464 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 950272 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 942080 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.899189949s of 10.917829514s, submitted: 4
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436865 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 868352 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 851968 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 441687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 835584 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884464264s of 10.945921898s, submitted: 8
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446511 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 802816 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 778240 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 770048 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451337 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 753664 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.035273552s of 12.054781914s, submitted: 8
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 745472 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 712704 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 671744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458576 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.947065353s of 12.031906128s, submitted: 6
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62251008 unmapped: 655360 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 463402 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 638976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14750 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.993368149s of 15.010603905s, submitted: 6
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470637 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 573440 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 565248 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473048 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.969283104s of 11.986205101s, submitted: 8
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 532480 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 499712 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 485103 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.836256981s of 13.859765053s, submitted: 8
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 458752 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 450560 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 487516 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 434176 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489927 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 401408 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.158172607s of 11.166720390s, submitted: 4
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494751 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497162 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 311296 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 229376 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 57344 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 16.31 MB, 0.03 MB/s
Interval WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: mgrc ms_handle_reset ms_handle_reset con 0x5621df718000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: mgrc handle_mgr_configure stats_period=5
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1018.819763184s of 1018.845520020s, submitted: 10
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1335296 heap: 70705152 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe10c000/0x0/0x4ffc00000, data 0x4f210/0xbe000, compress 0x0/0x0/0x0, omap 0x8be6, meta 0x1a2741a), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 70 ms_handle_reset con 0x5621e1456c00 session 0x5621e19bee00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 10534912 heap: 75366400 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 583003 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 18604032 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 72 ms_handle_reset con 0x5621e1457000 session 0x5621e19db180
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 18563072 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587971 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.282839775s of 10.690481186s, submitted: 34
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.774143219s of 35.782062531s, submitted: 13
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 18374656 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1457400 session 0x5621deefbdc0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 74 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc493c/0xd3f000, compress 0x0/0x0/0x0, omap 0x9a85, meta 0x1a2657b), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03c00 session 0x5621e195a000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 18046976 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03400 session 0x5621e1462a80
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 75 ms_handle_reset con 0x5621e1457800 session 0x5621e1462fc0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604284 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 17104896 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1457400 session 0x5621e077b6c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1456c00 session 0x5621e0a22e00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02c00 session 0x5621e067ea80
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 76 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc78e6/0xd47000, compress 0x0/0x0/0x0, omap 0xa435, meta 0x1a25bcb), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02800 session 0x5621df500000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 17080320 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 17088512 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 77 ms_handle_reset con 0x5621e1456c00 session 0x5621e077b180
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 15884288 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 78 ms_handle_reset con 0x5621e1457400 session 0x5621e19da380
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 15704064 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610354 data_alloc: 218103808 data_used: 858
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 15663104 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 79 ms_handle_reset con 0x5621e1bcec00 session 0x5621e14636c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd47a000/0x0/0x4ffc00000, data 0xccb716/0xd4d000, compress 0x0/0x0/0x0, omap 0xa7da, meta 0x1a25826), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 15753216 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 81 ms_handle_reset con 0x5621e1c03400 session 0x5621e0a23a40
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 15540224 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 82 ms_handle_reset con 0x5621e1c02c00 session 0x5621df8eca80
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 15392768 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.066205978s of 10.445180893s, submitted: 195
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947dc0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1457000 session 0x5621df500380
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 15171584 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 84 ms_handle_reset con 0x5621e1457800 session 0x5621e0a22700
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645550 data_alloc: 218103808 data_used: 4919
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 15007744 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fd468000/0x0/0x4ffc00000, data 0xcd301e/0xd60000, compress 0x0/0x0/0x0, omap 0xcff3, meta 0x1a2300d), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 85 ms_handle_reset con 0x5621e1457400 session 0x5621e19be8c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 14778368 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c27/0xd6b000, compress 0x0/0x0/0x0, omap 0xdcb6, meta 0x1a2234a), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 13451264 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 86 ms_handle_reset con 0x5621e1456c00 session 0x5621e19da8c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 21553152 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 20324352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 87 ms_handle_reset con 0x5621e1457000 session 0x5621df500e00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664901 data_alloc: 218103808 data_used: 4919
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1456000 session 0x5621e19468c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1bcec00 session 0x5621e1947a40
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0xcdadfd/0xd71000, compress 0x0/0x0/0x0, omap 0xea36, meta 0x2bc15ca), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 18849792 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 89 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19801c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 90 ms_handle_reset con 0x5621e1bd8c00 session 0x5621e0a22a80
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 18989056 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 91 ms_handle_reset con 0x5621e1456000 session 0x5621e1981180
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.402823448s of 10.174050331s, submitted: 340
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 18792448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 92 ms_handle_reset con 0x5621e1457000 session 0x5621e1946e00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fc2b3000/0x0/0x4ffc00000, data 0xcdeaed/0xd79000, compress 0x0/0x0/0x0, omap 0x108a3, meta 0x2bbf75d), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675212 data_alloc: 218103808 data_used: 21160
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bcec00 session 0x5621e19bf880
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 18677760 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19db880
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677676 data_alloc: 218103808 data_used: 21160
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 94 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947340
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 95 ms_handle_reset con 0x5621e1456000 session 0x5621e1981500
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c02400 session 0x5621e196b6c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c03800 session 0x5621e1991c00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.875562668s of 17.118930817s, submitted: 126
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1457000 session 0x5621e0061880
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc2a3000/0x0/0x4ffc00000, data 0xce568d/0xd87000, compress 0x0/0x0/0x0, omap 0x11946, meta 0x2bbe6ba), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1bcec00 session 0x5621e0061500
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 99 ms_handle_reset con 0x5621e1456000 session 0x5621dfe81a40
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698132 data_alloc: 218103808 data_used: 29317
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 18628608 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0xce82c1/0xd90000, compress 0x0/0x0/0x0, omap 0x11fd8, meta 0x2bbe028), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 18546688 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1bd8800 session 0x5621e0a22380
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1c03800 session 0x5621e195b180
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705238 data_alloc: 218103808 data_used: 33413
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 18522112 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.190230370s of 10.334465027s, submitted: 49
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 18407424 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561800 session 0x5621df8edc00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561400 session 0x5621e19bfc00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 102 ms_handle_reset con 0x5621e1456000 session 0x5621e1980000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 18178048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 103 ms_handle_reset con 0x5621e1bd8800 session 0x5621e199fc00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 104 ms_handle_reset con 0x5621e1c03800 session 0x5621e077a1c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717689 data_alloc: 218103808 data_used: 33413
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717945 data_alloc: 218103808 data_used: 34639
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621df8ec000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561000 session 0x5621e1980fc0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561800 session 0x5621e1946a80
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e038ddc0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621e196b340
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621e038dc00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 18087936 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.056212425s of 10.232484818s, submitted: 119
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e19be000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e05b8000
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 18096128 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf340
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621df8ed880
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e1980c40
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726131 data_alloc: 218103808 data_used: 35205
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc284000/0x0/0x4ffc00000, data 0xcf0a6f/0xda6000, compress 0x0/0x0/0x0, omap 0x13c81, meta 0x2bbc37f), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3560800 session 0x5621e070c380
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1456000 session 0x5621deefbdc0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1c03800 session 0x5621e0061340
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19ee1c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 17776640 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0xcf207e/0xdaa000, compress 0x0/0x0/0x0, omap 0x13ff0, meta 0x2bbc010), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e3561400 session 0x5621deefbc00
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 107 ms_handle_reset con 0x5621e3560400 session 0x5621dfe81880
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1456000 session 0x5621e1946380
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737866 data_alloc: 218103808 data_used: 35783
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1bd8800 session 0x5621e1462540
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1c03800 session 0x5621e0a23500
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3561400 session 0x5621e19db500
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.728271484s of 10.840334892s, submitted: 55
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3560000 session 0x5621e19be1c0
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e1456000 session 0x5621e196ba40
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf62c5/0xdb3000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3561800 session 0x5621e070cc40
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3560c00 session 0x5621e0a23340
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf6283/0xdb2000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 740491 data_alloc: 218103808 data_used: 36295
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 110 ms_handle_reset con 0x5621e1bd8800 session 0x5621dfe81500
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc27a000/0x0/0x4ffc00000, data 0xcf784b/0xdb2000, compress 0x0/0x0/0x0, omap 0x15133, meta 0x2bbaecd), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 17760256 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e196b180
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1c02400 session 0x5621e1981340
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1456000 session 0x5621df8ec380
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e077a700
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0xcf8ce4/0xdb3000, compress 0x0/0x0/0x0, omap 0x15543, meta 0x2bbaabd), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737991 data_alloc: 218103808 data_used: 32777
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 112 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf180
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0xcfa2f2/0xdb5000, compress 0x0/0x0/0x0, omap 0x15a01, meta 0x2bba5ff), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0xcfb7be/0xdb8000, compress 0x0/0x0/0x0, omap 0x15c9a, meta 0x2bba366), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743972 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.406369209s of 13.619614601s, submitted: 155
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 17743872 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}'
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'config show' '{prefix=config show}'
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:14 np0005580781 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}'
Jan 10 12:23:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:23:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 12:23:14 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100945430' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 12:23:14 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14754 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:15 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:15 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 12:23:15 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173986491' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 12:23:15 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:15 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 12:23:15 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018602978' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 12:23:16 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 12:23:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464268086' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 12:23:16 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:23:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 12:23:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045665878' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 12:23:16 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14772 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:23:17 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 10 12:23:17 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/463428898' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 10 12:23:17 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14776 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:23:17 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14780 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:23:18 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.14782 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:23:18 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:23:18.322+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:23:18 np0005580781 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:23:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 10 12:23:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694601461' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 10 12:23:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:23:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 10 12:23:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395341979' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 10 12:23:18 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 10 12:23:18 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037386076' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000022
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000022
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000029
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000014
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000143 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000143 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000175 1 0.000074
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000258 1 0.000072
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000131 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000121 1 0.000059
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000159 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000182 1 0.000056
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000018
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000166 1 0.000086
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000128 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000067
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.044729 15 0.000101
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.049968 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.050040 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.050106 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955799103s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796836853s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] exit Reset 0.000086 1 0.000173
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.955755234s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796836853s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.349070 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.356212 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.836077 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836105 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650593758s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491889954s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] exit Reset 0.000063 1 0.000101
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650562286s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491889954s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.349443 1 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.356491 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.836540 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836574 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650234222s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491912842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] exit Reset 0.000057 1 0.000112
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.650205612s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491912842s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.045693 15 0.000066
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.051057 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.051116 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.051146 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954932213s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796813965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] exit Reset 0.000043 1 0.000082
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.954910278s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796813965s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014383 2 0.000120
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.349955 1 0.000032
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357106 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.837218 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.837260 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649713516s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491882324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] exit Reset 0.000047 1 0.000102
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649688721s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491882324s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.350028 1 0.000044
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357198 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.835981 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649621010s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492034912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] exit Reset 0.000055 1 0.000098
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] exit Start 0.000041 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649598122s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492034912s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.350224 1 0.000049
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357398 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.836521 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.836550 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649418831s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492172241s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] exit Reset 0.000130 1 0.000169
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.649394989s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492172241s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.350774 1 0.000089
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.357837 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.837228 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.837256 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648921013s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.491943359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] exit Reset 0.000068 1 0.000092
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648898125s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.491943359s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.047242 15 0.000093
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.052893 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.052954 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.052975 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953218460s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796546936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] exit Reset 0.000058 1 0.000087
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953183174s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796546936s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.351080 1 0.000052
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.358116 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.837470 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.837492 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648669243s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Reset 0.000043 1 0.000070
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648649216s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.047419 15 0.000082
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.052809 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053040 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053156 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953178406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796897888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] exit Reset 0.000039 1 0.000064
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.953158379s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796897888s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.048026 15 0.000068
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.053540 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.053577 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.053597 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952614784s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796539307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] exit Reset 0.000052 1 0.000100
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952584267s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796539307s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.351703 1 0.000023
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.358780 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.839052 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.839076 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.648002625s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492103577s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] exit Reset 0.000041 1 0.000067
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.647982597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492103577s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.048458 15 0.000058
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.054034 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.054073 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.054093 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952114105s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796447754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] exit Reset 0.000046 1 0.000076
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.952090263s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796447754s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018216 2 0.000111
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354095 1 0.000055
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.361195 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.840885 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.840912 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.050784 15 0.000080
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.056326 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.056380 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645599365s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492210388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.056444 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] exit Reset 0.000132 1 0.000138
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645558357s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492210388s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949820518s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796524048s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] exit Reset 0.000086 1 0.000175
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949773788s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796524048s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354406 1 0.000025
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.361472 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841110 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841133 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645345688s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492187500s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] exit Reset 0.000051 1 0.000080
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.050258 15 0.000163
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.056758 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.056805 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645316124s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492187500s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.056830 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949461937s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Reset 0.000057 1 0.000102
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354125 1 0.000167
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949433327s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.361621 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.840828 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841768 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645701408s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492774963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] exit Reset 0.000039 1 0.000091
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.645682335s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492774963s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.051271 15 0.000169
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.057080 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.057181 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.057209 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949236870s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796409607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.051652 15 0.000211
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.057274 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.057343 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.057373 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948904037s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796211243s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] exit Reset 0.000047 1 0.000091
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948876381s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796211243s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354980 1 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362050 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.840599 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.840628 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644659996s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492225647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Reset 0.000047 1 0.000094
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644639015s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492225647s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.051462 15 0.000139
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.056937 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058192 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058218 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949141502s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796890259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] exit Reset 0.000036 1 0.000073
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949123383s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796890259s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.354990 1 0.000058
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362001 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841285 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841301 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644832611s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492797852s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] exit Reset 0.000045 1 0.000091
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644811630s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492797852s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.052012 15 0.000149
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.057708 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058206 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058312 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948488235s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796607971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] exit Reset 0.000034 1 0.000067
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.948470116s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796607971s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.355422 1 0.000066
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362574 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841566 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841587 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644400597s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492721558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] exit Reset 0.000031 1 0.000053
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644385338s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492721558s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.053036 15 0.000101
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.058658 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058713 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058736 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947714806s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796150208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] exit Reset 0.000058 1 0.000079
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947671890s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796150208s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.355647 1 0.000146
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.362769 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.841797 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.841818 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644159317s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492759705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] exit Reset 0.000032 1 0.000053
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.644144058s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492759705s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.053118 15 0.000209
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.058534 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.058589 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.058689 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947475433s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 91.796226501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] exit Reset 0.000037 1 0.000097
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.947455406s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796226501s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020279 2 0.000031
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020091 2 0.000031
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019579 2 0.000044
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019380 2 0.000029
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020092 2 0.000050
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=0 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000023
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000328
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019982 2 0.000031
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.357547 1 0.000025
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.364576 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.843615 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.843656 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642189980s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 90.492256165s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] exit Reset 0.000067 1 0.000141
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44 pruub=11.642154694s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 90.492256165s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000022
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000190 1 0.000040
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Reset 0.000107 1 0.000141
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] exit Start 0.000055 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.949159622s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 91.796409607s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000705 1 0.000080
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000127 1 0.000053
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000104 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000025
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000097 1 0.000043
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000099 1 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000095 1 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000155 1 0.000065
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000089 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000046
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000111 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000020
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000118 1 0.000035
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000107 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000022
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000039
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000044
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000102 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000030
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000030
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000088 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000020
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000112 1 0.000052
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000054
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=0 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000027
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000044
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.027201 2 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.026016 2 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.026106 2 0.000024
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024975 2 0.000026
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000173 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023887 2 0.000079
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024971 2 0.000040
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024799 2 0.000026
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023450 2 0.000061
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021282 2 0.000061
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020790 2 0.000088
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020386 2 0.000113
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019940 2 0.000054
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000019
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000134 1 0.000078
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.025609 2 0.000126
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024180 2 0.000052
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012109 2 0.000165
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010481 2 0.000041
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.010338 2 0.000050
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009765 2 0.000066
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010374 2 0.000044
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.009613 2 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.008808 2 0.000040
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008598 2 0.000058
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008392 2 0.000053
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008167 2 0.000043
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011967 2 0.000056
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007741 2 0.000062
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012523 2 0.000054
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.008144 2 0.000030
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.007287 2 0.000039
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006787 2 0.000035
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007129 2 0.000035
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.003309 2 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007851 2 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014713 2 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 44 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122323 2 0.000080
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.132335 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122844 2 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133483 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122186 2 0.000131
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.131238 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122819 2 0.000187
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.133324 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122332 2 0.000099
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122110 2 0.000040
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.132133 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130631 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122349 2 0.000043
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.131066 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122209 2 0.000038
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130068 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122041 2 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.129492 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122347 2 0.000040
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130655 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122165 2 0.000095
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121965 2 0.000081
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.130430 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129274 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122003 2 0.000310
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.130006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121945 2 0.000157
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.125552 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122197 2 0.000035
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123792 2 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.129128 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.136146 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122627 2 0.000055
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.134775 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.136635 2 0.000056
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.156750 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123235 2 0.000077
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.133774 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.137651 2 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.157132 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.137813 2 0.000027
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.157527 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.137735 2 0.000025
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.157939 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.138048 2 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.158300 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.138143 2 0.000039
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.158547 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123102 2 0.000041
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.136399 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.140854 2 0.000064
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.159274 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127599 2 0.000051
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.154911 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127388 2 0.000194
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.125790 2 0.001323
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.150283 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.153751 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127454 2 0.000058
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152555 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127591 2 0.000031
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127674 2 0.000242
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127623 2 0.000079
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.153986 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152489 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152722 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.126500 2 0.000050
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.152227 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123581 2 0.000055
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.138443 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127670 2 0.000098
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.151365 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127788 2 0.000167
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.151985 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127778 2 0.000043
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148801 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.146321 2 0.000071
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.161075 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127973 2 0.000174
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.149508 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.127920 2 0.000070
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148021 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.128031 2 0.000088
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.148636 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002992 4 0.000123
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003725 4 0.000695
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.144046 7 0.000311
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008979 4 0.000088
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.009193 5 0.000185
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010219 4 0.000332
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.010174 5 0.000291
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.010183 5 0.000350
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010124 4 0.000056
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010106 4 0.000059
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010050 4 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.001243 1 0.000074
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.010257 5 0.000331
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010184 4 0.000282
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.010115 5 0.000156
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010046 4 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010077 4 0.000065
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009955 4 0.000077
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009946 4 0.000036
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009891 4 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009813 4 0.000065
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009827 4 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009635 4 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009586 4 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009460 4 0.000388
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009377 4 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009405 4 0.000091
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009384 4 0.000049
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009048 4 0.000077
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009063 4 0.000110
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010346 4 0.000055
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009115 4 0.000062
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009043 4 0.000055
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009040 4 0.000043
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009029 4 0.000049
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=35/17 lis/c=44/35 les/c/f=45/36/0 sis=44) [1] r=0 lpr=44 pi=[35,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008968 4 0.000071
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008899 4 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008945 4 0.000056
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011254 5 0.001322
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009084 4 0.000056
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009069 4 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009036 4 0.000035
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011637 4 0.002788
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011569 4 0.002735
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=44/37 les/c/f=45/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.152598 7 0.000051
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.168352 7 0.000112
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.170051 7 0.000041
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.170536 7 0.000041
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.169394 7 0.000050
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153978 7 0.000055
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158863 7 0.000093
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.156982 7 0.000097
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.150645 7 0.003536
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.154534 7 0.000071
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.157704 7 0.000083
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153469 7 0.000069
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158705 7 0.000056
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158133 7 0.000083
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.159603 7 0.000065
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.158443 7 0.000087
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.168436 7 0.000078
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153954 7 0.000078
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.153785 7 0.000064
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.159850 7 0.000069
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008645 1 0.000038
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.008770 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.152858 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.175597 7 0.000041
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174688 7 0.000043
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.174384 7 0.000100
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173914 7 0.000076
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.176573 7 0.000041
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.173930 7 0.000059
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163743 7 0.000065
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.166482 7 0.000079
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161014 7 0.000088
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.166663 7 0.000093
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161283 7 0.000090
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.164316 7 0.000062
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161261 7 0.000066
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.166461 7 0.000062
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163189 7 0.000104
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.161499 7 0.000063
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163471 7 0.000071
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.163762 7 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.168054 7 0.000074
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.165554 7 0.000094
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.291023 1 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.291349 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409798 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125822 1 0.000128
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.417378 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.148772 1 0.000057
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.566021 1 0.000022
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.139713 1 0.000116
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.705941 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.073770 1 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.778903 1 0.000096
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 lc 33'21 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069879 1 0.000136
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/39 les/c/f=45/42/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845419 1 0.000032
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845580 1 0.000019
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845766 1 0.000013
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845826 1 0.000013
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845944 1 0.000013
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846007 1 0.000016
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846055 1 0.000014
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846143 1 0.000014
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846186 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846200 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846286 1 0.000035
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846361 1 0.000016
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846433 1 0.000017
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846507 1 0.000018
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846606 1 0.000016
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846689 1 0.000015
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846767 1 0.000019
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846838 1 0.000016
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846904 1 0.000020
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846976 1 0.000187
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842443 1 0.000043
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842540 1 0.000012
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842237 1 0.000087
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842195 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842059 1 0.000031
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842129 1 0.000052
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841990 1 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841811 1 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841797 1 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841551 1 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841596 1 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841423 1 0.000026
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841424 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841174 1 0.000032
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838555 1 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838372 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838135 1 0.000039
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838102 1 0.000027
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.838117 1 0.000062
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.837948 1 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009748 1 0.000131
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.855253 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1b( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.007891 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.015062 1 0.000130
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.860742 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.029125 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022005 1 0.000030
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.867812 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.13( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.037886 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 2023424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029245 1 0.000078
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.875124 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.045683 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036517 1 0.000024
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.882490 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.051910 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044002 1 0.000023
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.890041 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.044053 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051260 1 0.000067
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.897356 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.056281 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058509 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.904688 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.061707 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.065843 1 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.912074 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.062825 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073151 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.919387 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.18( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.072891 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080508 1 0.000032
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.926831 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.084594 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.087763 1 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.934165 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.6( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.092904 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095141 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.941607 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.099772 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102769 1 0.000054
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.949348 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.108985 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.109882 1 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.956538 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.4( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.115014 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117287 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.964022 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.132492 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124701 1 0.000027
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.971519 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1f( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.125510 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.131697 1 0.000044
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.978571 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.132389 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139163 1 0.000045
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.986126 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.9( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.146015 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146381 1 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.993550 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.3( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.148141 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153658 1 0.000031
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.996138 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.171774 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.160944 1 0.000050
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.003523 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.178234 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168206 1 0.000037
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.010488 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.11( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.184936 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175559 1 0.000027
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.017790 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.15( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.191726 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182981 1 0.000156
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.025108 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.201717 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190076 1 0.000029
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.032244 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.206217 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197382 1 0.000025
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.039402 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.5( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.203190 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204759 1 0.000049
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.046600 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.213131 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212064 1 0.000027
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.053899 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.2( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.214963 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.219566 1 0.000026
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.061169 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.227895 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226746 1 0.000025
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.068403 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.229727 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.234315 1 0.000026
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075799 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.240158 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241365 1 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.082821 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.c( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.244114 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248715 1 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089920 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.8( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.256426 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256375 1 0.000024
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.094982 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.258219 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.263420 1 0.000073
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.101847 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.e( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.263389 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270640 1 0.000061
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.108856 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.272372 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277952 1 0.000028
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.116094 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1a( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.279889 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.285279 1 0.000083
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.123459 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[7.1( empty lb MIN local-lis/les=40/43 n=0 ec=40/25 lis/c=40/40 les/c/f=43/43/0 sis=44) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.291569 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 DELETING pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.292478 1 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.130472 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 45 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=44) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.296073 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 1957888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 1933312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 45 heartbeat osd_stat(store_statfs(0x4fe0e2000/0x0/0x4ffc00000, data 0x9f159/0xea000, compress 0x0/0x0/0x0, omap 0x742b, meta 0x1a28bd5), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 1925120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1867776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 374176 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000120 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000029
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000638 1 0.000477
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000095 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000033
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000095
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000021
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000034
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000659 2 0.000093
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001536 2 0.000093
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001091 2 0.000032
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002306 2 0.000106
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 46 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1835008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 46 handle_osd_map epochs [46,47], i have 47, src has [1,47]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 47 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564181 2 0.000247
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.565547 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564003 2 0.000140
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.567065 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564473 2 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.565296 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.564612 2 0.000170
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.566409 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001727 4 0.000187
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002935 4 0.000149
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002644 4 0.000223
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002959 5 0.000224
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000164 1 0.000070
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007928 2 0.000184
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.008205 1 0.000087
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.074415 1 0.000198
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000031 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 47 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/42/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0xa076f/0xed000, compress 0x0/0x0/0x0, omap 0x76a0, meta 0x1a28960), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1835008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 47 handle_osd_map epochs [47,48], i have 47, src has [1,48]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.003501892s of 10.250342369s, submitted: 423
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.292675 7 0.000154
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.082731 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.208309 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.208363 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926258087s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995513916s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.222692 7 0.000119
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.082947 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.213400 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] exit Reset 0.000093 1 0.000162
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.213425 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.926201820s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995513916s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925914764s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995262146s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] exit Reset 0.000067 1 0.000102
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925878525s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995262146s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.507078 7 0.000163
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.083769 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.215024 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.215050 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925272942s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.995002747s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] exit Reset 0.000057 1 0.000084
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.925237656s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.995002747s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 6.782265 7 0.000151
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 7.083947 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 8.216096 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 8.216127 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924705505s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 active pruub 95.994689941s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] exit Reset 0.000085 1 0.000117
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 48 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=8.924655914s) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY pruub 95.994689941s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 48 handle_osd_map epochs [48,48], i have 48, src has [1,48]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 1810432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.021868 7 0.000091
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.022355 7 0.000128
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.022497 7 0.000089
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.021821 7 0.000137
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.014557 2 0.000083
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.014593 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000164 1 0.000086
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 49 heartbeat osd_stat(store_statfs(0x4fe0d3000/0x0/0x4ffc00000, data 0xa3205/0xf3000, compress 0x0/0x0/0x0, omap 0x7d8d, meta 0x1a28273), peers [0,2] op hist [0,0,0,0,1])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.121421 2 0.000266
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.121633 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.158666 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138388 2 0.000173
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.138457 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000147 1 0.000118
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.137878 2 0.000374
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.138165 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.298630 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.345136 2 0.000055
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.345198 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000117 1 0.000153
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.405514 2 0.000089
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.405576 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000181 1 0.000163
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.079181 2 0.000236
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.079371 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.446454 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.033219 2 0.000160
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.033468 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 49 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=48) [0] r=-1 lpr=48 pi=[44,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.461635 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 1802240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000119 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000036
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000251 1 0.000062
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000616 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=0 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000035
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000154 1 0.000083
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002300 2 0.001261
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.001238 2 0.000077
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 50 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 1736704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 50 handle_osd_map epochs [51,51], i have 51, src has [1,51]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012166 2 0.000114
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012171 2 0.000112
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.013684 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.015281 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/42 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=39/39 les/c/f=42/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002154 4 0.000253
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000094 1 0.000084
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002297 4 0.000295
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007621 2 0.000065
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.007694 2 0.000073
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 lc 33'15 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395490 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.252192 1 0.000111
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000034 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 51 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=50/51 n=2 ec=39/23 lis/c=50/39 les/c/f=51/42/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0xa74e9/0xfc000, compress 0x0/0x0/0x0, omap 0x8978, meta 0x1a27688), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0xa74e9/0xfc000, compress 0x0/0x0/0x0, omap 0x8978, meta 0x1a27688), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 399912 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 15.298156 21 0.000176
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 16.014371 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 17.143886 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 17.143933 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.995003700s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995697021s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] exit Reset 0.000092 1 0.000152
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994950294s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995697021s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 15.587230 21 0.000161
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 16.014908 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 17.148261 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 17.148307 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.994361877s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 active pruub 111.995353699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] exit Reset 0.001005 1 0.001088
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 52 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=15.993425369s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY pruub 111.995353699s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.936134 6 0.000102
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.937499 6 0.000063
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.079444 3 0.000080
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.079469 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000104 1 0.000048
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.885927200s of 10.011228561s, submitted: 62
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 DELETING pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.134527 2 0.000316
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.134692 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.150356 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.224618 3 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.224662 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000126 1 0.000086
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 DELETING pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.017283 2 0.000294
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.017476 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 53 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=2 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=-1 lpr=52 pi=[44,52)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.179681 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fe0ca000/0x0/0x4ffc00000, data 0xa9edf/0x100000, compress 0x0/0x0/0x0, omap 0x8f85, meta 0x1a2707b), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407557 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fe0c5000/0x0/0x4ffc00000, data 0xab4f5/0x103000, compress 0x0/0x0/0x0, omap 0x922d, meta 0x1a26dd3), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 516096 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 507904 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 491520 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 475136 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 412276 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 442368 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 56 heartbeat osd_stat(store_statfs(0x4fe0bf000/0x0/0x4ffc00000, data 0xae121/0x109000, compress 0x0/0x0/0x0, omap 0x9744, meta 0x1a268bc), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 56 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 425984 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177284241s of 10.220639229s, submitted: 17
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.306095 40 0.000408
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.316440 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 28.445729 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 28.445766 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692907333s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 active pruub 119.995796204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] exit Reset 0.000124 1 0.000211
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 58 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=12.692836761s) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 119.995796204s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 58 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 376832 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.695591 6 0.000110
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001112 2 0.000101
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 DELETING pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003949 1 0.000062
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.005134 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 59 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=44/45 n=1 ec=39/23 lis/c=44/44 les/c/f=45/45/0 sis=58) [0] r=-1 lpr=58 pi=[44,58)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.700796 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.436269 39 0.000211
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.438120 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 24.003692 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 24.003729 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=46) [1] r=0 lpr=46 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564128876s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 active pruub 118.069427490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] exit Reset 0.000153 1 0.000246
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] state<Start>: transitioning to Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] exit Start 0.000021 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 60 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.564027786s) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 118.069427490s@ mbc={}] enter Started/Stray
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 1400832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 60 heartbeat osd_stat(store_statfs(0x4fe0b3000/0x0/0x4ffc00000, data 0xb364d/0x115000, compress 0x0/0x0/0x0, omap 0xa15f, meta 0x1a25ea1), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432423 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.987625 7 0.000183
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000141 1 0.000112
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 DELETING pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003077 1 0.000088
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.003301 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 61 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=46/47 n=1 ec=39/23 lis/c=46/46 les/c/f=47/47/0 sis=60) [0] r=-1 lpr=60 pi=[46,60)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.991026 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=0 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000159 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=0 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000052
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000300 1 0.000072
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000587 2 0.000119
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 62 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1376256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.801783 2 0.000081
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.802768 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002677 3 0.000184
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000076 1 0.000050
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007659 3 0.000047
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 63 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=62/63 n=1 ec=39/23 lis/c=62/48 les/c/f=63/49/0 sis=62) [1] r=0 lpr=62 pi=[48,62)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1368064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1359872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444204 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d(unlocked)] enter Initial
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=0 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000103 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=0 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000030
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000464 1 0.000042
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000953 2 0.000058
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 64 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0ac000/0x0/0x4ffc00000, data 0xb7701/0x11e000, compress 0x0/0x0/0x0, omap 0xa965, meta 0x1a2569b), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 64 handle_osd_map epochs [64,65], i have 65, src has [1,65]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.643723 2 0.000060
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.645228 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002158 4 0.000286
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000192 1 0.000124
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 lc 33'13 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067784 2 0.000098
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 pg_epoch: 65 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=64/65 n=1 ec=39/23 lis/c=64/52 les/c/f=65/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a3000/0x0/0x4ffc00000, data 0xba1dd/0x125000, compress 0x0/0x0/0x0, omap 0xafb1, meta 0x1a2504f), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a3000/0x0/0x4ffc00000, data 0xba1dd/0x125000, compress 0x0/0x0/0x0, omap 0xafb1, meta 0x1a2504f), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.386300087s of 10.565129280s, submitted: 38
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 461170 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.835360527s of 14.069359779s, submitted: 7
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471536 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 1228800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473949 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.012902260s of 11.086735725s, submitted: 4
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1130496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026464462s of 12.046990395s, submitted: 8
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 488419 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490830 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.865109444s of 13.878160477s, submitted: 6
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 493243 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 495656 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498069 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.043815613s of 14.055473328s, submitted: 6
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500482 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.805700302s of 12.849143982s, submitted: 6
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 510130 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512541 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959676743s of 10.976650238s, submitted: 6
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522187 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 737280 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.874062538s of 15.986262321s, submitted: 10
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 531833 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 534244 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.020789146s of 13.038912773s, submitted: 8
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 536655 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539068 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 541481 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.103586197s of 15.117080688s, submitted: 6
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543894 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546305 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548718 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.009474754s of 12.021146774s, submitted: 6
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 555951 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.718189240s of 13.738556862s, submitted: 8
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 565595 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 16.66 MB, 0.03 MB/s
Interval WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:23:19 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:27:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:27:51 np0005580781 rsyslogd[1006]: imjournal: 15364 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 10 12:27:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:27:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:27:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:27:55 np0005580781 nova_compute[237049]: 2026-01-10 17:27:55.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:27:55 np0005580781 nova_compute[237049]: 2026-01-10 17:27:55.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:27:55 np0005580781 nova_compute[237049]: 2026-01-10 17:27:55.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:27:55 np0005580781 nova_compute[237049]: 2026-01-10 17:27:55.374 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:27:55 np0005580781 nova_compute[237049]: 2026-01-10 17:27:55.376 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:27:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:27:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:27:59 np0005580781 nova_compute[237049]: 2026-01-10 17:27:59.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:27:59 np0005580781 nova_compute[237049]: 2026-01-10 17:27:59.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:27:59 np0005580781 nova_compute[237049]: 2026-01-10 17:27:59.402 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:27:59 np0005580781 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:27:59 np0005580781 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:27:59 np0005580781 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:27:59 np0005580781 nova_compute[237049]: 2026-01-10 17:27:59.403 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:27:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:27:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:27:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937330844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.005 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.258 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.261 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.261 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.262 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.364 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.365 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.397 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:28:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:28:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636035365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.956 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.966 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.984 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.987 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 10 12:28:00 np0005580781 nova_compute[237049]: 2026-01-10 17:28:00.987 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 10 12:28:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:02 np0005580781 nova_compute[237049]: 2026-01-10 17:28:02.977 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:28:02 np0005580781 nova_compute[237049]: 2026-01-10 17:28:02.978 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:28:03 np0005580781 nova_compute[237049]: 2026-01-10 17:28:03.006 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:28:03 np0005580781 nova_compute[237049]: 2026-01-10 17:28:03.006 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:28:03 np0005580781 nova_compute[237049]: 2026-01-10 17:28:03.006 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 10 12:28:03 np0005580781 nova_compute[237049]: 2026-01-10 17:28:03.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:28:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:05 np0005580781 nova_compute[237049]: 2026-01-10 17:28:05.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:28:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:28:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:28:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:28:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:28:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:28:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:28:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:28:16 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:28:17 np0005580781 podman[253564]: 2026-01-10 17:28:17.410005764 +0000 UTC m=+0.063229614 container create 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 10 12:28:17 np0005580781 systemd[1]: Started libpod-conmon-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope.
Jan 10 12:28:17 np0005580781 podman[253564]: 2026-01-10 17:28:17.380878813 +0000 UTC m=+0.034102673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:28:17 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:28:17 np0005580781 podman[253564]: 2026-01-10 17:28:17.515491949 +0000 UTC m=+0.168715829 container init 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 10 12:28:17 np0005580781 podman[253564]: 2026-01-10 17:28:17.526315985 +0000 UTC m=+0.179539835 container start 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 10 12:28:17 np0005580781 podman[253564]: 2026-01-10 17:28:17.530890824 +0000 UTC m=+0.184114724 container attach 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:28:17 np0005580781 gifted_montalcini[253580]: 167 167
Jan 10 12:28:17 np0005580781 systemd[1]: libpod-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope: Deactivated successfully.
Jan 10 12:28:17 np0005580781 conmon[253580]: conmon 8b3ce6910b4050383419 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope/container/memory.events
Jan 10 12:28:17 np0005580781 podman[253585]: 2026-01-10 17:28:17.594190059 +0000 UTC m=+0.036221193 container died 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 12:28:17 np0005580781 systemd[1]: var-lib-containers-storage-overlay-0eeeaff61198cacb7be1d7252b44629cb40eb2777edbf12b6bc67b8453e0372a-merged.mount: Deactivated successfully.
Jan 10 12:28:17 np0005580781 podman[253585]: 2026-01-10 17:28:17.629480844 +0000 UTC m=+0.071511948 container remove 8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:28:17 np0005580781 systemd[1]: libpod-conmon-8b3ce6910b405038341908f2dfb91ee3fba607416608fea8f3b2eceea62761c0.scope: Deactivated successfully.
Jan 10 12:28:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:28:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:17 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:28:17 np0005580781 podman[253607]: 2026-01-10 17:28:17.867484687 +0000 UTC m=+0.070065077 container create 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:28:17 np0005580781 systemd[1]: Started libpod-conmon-62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298.scope.
Jan 10 12:28:17 np0005580781 podman[253607]: 2026-01-10 17:28:17.836480363 +0000 UTC m=+0.039060803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:28:17 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:28:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:17 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:17 np0005580781 podman[253607]: 2026-01-10 17:28:17.991498875 +0000 UTC m=+0.194079285 container init 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 10 12:28:18 np0005580781 podman[253607]: 2026-01-10 17:28:18.006867059 +0000 UTC m=+0.209447449 container start 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 12:28:18 np0005580781 podman[253607]: 2026-01-10 17:28:18.013809185 +0000 UTC m=+0.216389605 container attach 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:28:18 np0005580781 great_goldstine[253624]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:28:18 np0005580781 great_goldstine[253624]: --> All data devices are unavailable
Jan 10 12:28:18 np0005580781 systemd[1]: libpod-62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298.scope: Deactivated successfully.
Jan 10 12:28:18 np0005580781 podman[253607]: 2026-01-10 17:28:18.569270781 +0000 UTC m=+0.771851201 container died 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:28:18 np0005580781 systemd[1]: var-lib-containers-storage-overlay-30f1c5563b2b4c9295bee3a6c0e6cd6121e37db5f00cdb1b2bf6dc7a82c10185-merged.mount: Deactivated successfully.
Jan 10 12:28:18 np0005580781 podman[253607]: 2026-01-10 17:28:18.636916419 +0000 UTC m=+0.839496809 container remove 62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:28:18 np0005580781 systemd[1]: libpod-conmon-62d09bddd0c53be140f38b51e41dd2f51e81deefbb624766cfb125746f251298.scope: Deactivated successfully.
Jan 10 12:28:18 np0005580781 podman[253645]: 2026-01-10 17:28:18.721516736 +0000 UTC m=+0.102235755 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 10 12:28:18 np0005580781 podman[253653]: 2026-01-10 17:28:18.763612322 +0000 UTC m=+0.144445724 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 10 12:28:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:19 np0005580781 podman[253764]: 2026-01-10 17:28:19.127591428 +0000 UTC m=+0.037717335 container create e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 12:28:19 np0005580781 systemd[1]: Started libpod-conmon-e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47.scope.
Jan 10 12:28:19 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:28:19 np0005580781 podman[253764]: 2026-01-10 17:28:19.110909927 +0000 UTC m=+0.021035854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:28:19 np0005580781 podman[253764]: 2026-01-10 17:28:19.211481424 +0000 UTC m=+0.121607411 container init e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 10 12:28:19 np0005580781 podman[253764]: 2026-01-10 17:28:19.220099127 +0000 UTC m=+0.130225044 container start e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:28:19 np0005580781 podman[253764]: 2026-01-10 17:28:19.223848483 +0000 UTC m=+0.133974480 container attach e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 12:28:19 np0005580781 beautiful_tesla[253780]: 167 167
Jan 10 12:28:19 np0005580781 systemd[1]: libpod-e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47.scope: Deactivated successfully.
Jan 10 12:28:19 np0005580781 podman[253764]: 2026-01-10 17:28:19.225572031 +0000 UTC m=+0.135697948 container died e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 12:28:19 np0005580781 systemd[1]: var-lib-containers-storage-overlay-3acdfcf703d7b7c189f4744e3a2b853c564b74b7c1852f50f3a97979327a6ba7-merged.mount: Deactivated successfully.
Jan 10 12:28:19 np0005580781 podman[253764]: 2026-01-10 17:28:19.264371746 +0000 UTC m=+0.174497663 container remove e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 12:28:19 np0005580781 systemd[1]: libpod-conmon-e6357e9e648bf614a2ea8fbd4c8b1f7814825a6ae968c028f113060434e2fb47.scope: Deactivated successfully.
Jan 10 12:28:19 np0005580781 podman[253803]: 2026-01-10 17:28:19.523888476 +0000 UTC m=+0.069247775 container create f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:28:19 np0005580781 systemd[1]: Started libpod-conmon-f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c.scope.
Jan 10 12:28:19 np0005580781 podman[253803]: 2026-01-10 17:28:19.49425401 +0000 UTC m=+0.039613359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:28:19 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:28:19 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:19 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:19 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:19 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:19 np0005580781 podman[253803]: 2026-01-10 17:28:19.631582643 +0000 UTC m=+0.176942002 container init f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 10 12:28:19 np0005580781 podman[253803]: 2026-01-10 17:28:19.64814239 +0000 UTC m=+0.193501659 container start f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 10 12:28:19 np0005580781 podman[253803]: 2026-01-10 17:28:19.654374846 +0000 UTC m=+0.199734145 container attach f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 10 12:28:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]: {
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:    "0": [
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:        {
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "devices": [
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "/dev/loop3"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            ],
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_name": "ceph_lv0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_size": "21470642176",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "name": "ceph_lv0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "tags": {
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cluster_name": "ceph",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.crush_device_class": "",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.encrypted": "0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.objectstore": "bluestore",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osd_id": "0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.type": "block",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.vdo": "0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.with_tpm": "0"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            },
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "type": "block",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "vg_name": "ceph_vg0"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:        }
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:    ],
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:    "1": [
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:        {
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "devices": [
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "/dev/loop4"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            ],
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_name": "ceph_lv1",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_size": "21470642176",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "name": "ceph_lv1",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "tags": {
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cluster_name": "ceph",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.crush_device_class": "",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.encrypted": "0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.objectstore": "bluestore",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osd_id": "1",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.type": "block",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.vdo": "0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.with_tpm": "0"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            },
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "type": "block",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "vg_name": "ceph_vg1"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:        }
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:    ],
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:    "2": [
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:        {
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "devices": [
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "/dev/loop5"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            ],
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_name": "ceph_lv2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_size": "21470642176",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "name": "ceph_lv2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "tags": {
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.cluster_name": "ceph",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.crush_device_class": "",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.encrypted": "0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.objectstore": "bluestore",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osd_id": "2",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.type": "block",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.vdo": "0",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:                "ceph.with_tpm": "0"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            },
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "type": "block",
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:            "vg_name": "ceph_vg2"
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:        }
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]:    ]
Jan 10 12:28:19 np0005580781 flamboyant_cannon[253819]: }
Jan 10 12:28:20 np0005580781 systemd[1]: libpod-f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c.scope: Deactivated successfully.
Jan 10 12:28:20 np0005580781 podman[253803]: 2026-01-10 17:28:20.001345122 +0000 UTC m=+0.546704441 container died f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:28:20 np0005580781 systemd[1]: var-lib-containers-storage-overlay-15876511755c3625d3bce3933f2e286ef97d399128155b849f4434effdc54b0b-merged.mount: Deactivated successfully.
Jan 10 12:28:20 np0005580781 podman[253803]: 2026-01-10 17:28:20.065742809 +0000 UTC m=+0.611102118 container remove f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cannon, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 10 12:28:20 np0005580781 systemd[1]: libpod-conmon-f55640314e94017358116557b090d0b3bd9fe895a7c19e9ff54fbacf1797e72c.scope: Deactivated successfully.
Jan 10 12:28:20 np0005580781 podman[253903]: 2026-01-10 17:28:20.63590702 +0000 UTC m=+0.061402632 container create 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:28:20 np0005580781 systemd[1]: Started libpod-conmon-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope.
Jan 10 12:28:20 np0005580781 podman[253903]: 2026-01-10 17:28:20.614914928 +0000 UTC m=+0.040410550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:28:20 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:28:20 np0005580781 podman[253903]: 2026-01-10 17:28:20.729872981 +0000 UTC m=+0.155368653 container init 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:28:20 np0005580781 podman[253903]: 2026-01-10 17:28:20.739157373 +0000 UTC m=+0.164652975 container start 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:28:20 np0005580781 podman[253903]: 2026-01-10 17:28:20.744553115 +0000 UTC m=+0.170048727 container attach 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 10 12:28:20 np0005580781 gifted_poincare[253920]: 167 167
Jan 10 12:28:20 np0005580781 systemd[1]: libpod-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope: Deactivated successfully.
Jan 10 12:28:20 np0005580781 conmon[253920]: conmon 58249e71bf08426a5d74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope/container/memory.events
Jan 10 12:28:20 np0005580781 podman[253903]: 2026-01-10 17:28:20.746896931 +0000 UTC m=+0.172392563 container died 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:28:20 np0005580781 systemd[1]: var-lib-containers-storage-overlay-d31c834a4af00a30fa32f4275bdc33b7e0b91a8d1fa1ff26f045d3de76e55136-merged.mount: Deactivated successfully.
Jan 10 12:28:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:20 np0005580781 podman[253903]: 2026-01-10 17:28:20.804343061 +0000 UTC m=+0.229838683 container remove 58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_poincare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:28:20 np0005580781 systemd[1]: libpod-conmon-58249e71bf08426a5d74ef17d52bd707457f199d5da5bff221a760bac3ac9717.scope: Deactivated successfully.
Jan 10 12:28:21 np0005580781 podman[253944]: 2026-01-10 17:28:21.037026744 +0000 UTC m=+0.064160321 container create 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:28:21 np0005580781 systemd[1]: Started libpod-conmon-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope.
Jan 10 12:28:21 np0005580781 podman[253944]: 2026-01-10 17:28:21.010823405 +0000 UTC m=+0.037957062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:28:21 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:28:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:21 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:28:21 np0005580781 podman[253944]: 2026-01-10 17:28:21.142153849 +0000 UTC m=+0.169287466 container init 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:28:21 np0005580781 podman[253944]: 2026-01-10 17:28:21.156742711 +0000 UTC m=+0.183876318 container start 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:28:21 np0005580781 podman[253944]: 2026-01-10 17:28:21.161070803 +0000 UTC m=+0.188204430 container attach 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:28:21 np0005580781 lvm[254039]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:28:21 np0005580781 lvm[254039]: VG ceph_vg1 finished
Jan 10 12:28:21 np0005580781 lvm[254038]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:28:21 np0005580781 lvm[254038]: VG ceph_vg0 finished
Jan 10 12:28:21 np0005580781 lvm[254041]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:28:21 np0005580781 lvm[254041]: VG ceph_vg2 finished
Jan 10 12:28:21 np0005580781 epic_tharp[253960]: {}
Jan 10 12:28:22 np0005580781 systemd[1]: libpod-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope: Deactivated successfully.
Jan 10 12:28:22 np0005580781 podman[253944]: 2026-01-10 17:28:22.026510903 +0000 UTC m=+1.053644470 container died 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 12:28:22 np0005580781 systemd[1]: libpod-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope: Consumed 1.353s CPU time.
Jan 10 12:28:22 np0005580781 systemd[1]: var-lib-containers-storage-overlay-5092a2192ccb86d00e323b466ae9438ca10763330674827041aaecc370ee55c5-merged.mount: Deactivated successfully.
Jan 10 12:28:22 np0005580781 podman[253944]: 2026-01-10 17:28:22.081212506 +0000 UTC m=+1.108346113 container remove 989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 12:28:22 np0005580781 systemd[1]: libpod-conmon-989ba271444b11adbe746ec21addbeb2f286deeeb2b40b160111ed50f3f0bfa9.scope: Deactivated successfully.
Jan 10 12:28:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:28:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:28:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:28:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:28:29 np0005580781 ceph-osd[85764]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5552 writes, 23K keys, 5552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5552 writes, 988 syncs, 5.62 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1173 writes, 3350 keys, 1173 commit groups, 1.0 writes per commit group, ingest: 1.88 MB, 0.00 MB/s#012Interval WAL: 1173 writes, 520 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 12:28:31 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:28:35 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 6001 writes, 24K keys, 6001 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6001 writes, 1157 syncs, 5.19 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1449 writes, 4204 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 2.27 MB, 0.00 MB/s#012Interval WAL: 1449 writes, 642 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 12:28:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:28:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/763008534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:28:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:28:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/763008534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:28:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:28:38
Jan 10 12:28:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:28:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:28:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms']
Jan 10 12:28:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:28:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:28:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:28:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:28:42 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6272 writes, 24K keys, 6272 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6272 writes, 1344 syncs, 4.67 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2050 writes, 5141 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s#012Interval WAL: 2050 writes, 951 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:28:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:28:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:48 np0005580781 ceph-mgr[75538]: [devicehealth INFO root] Check health
Jan 10 12:28:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:28:48.947 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:28:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:28:48.948 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:28:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:28:48.949 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:28:49 np0005580781 podman[254082]: 2026-01-10 17:28:49.086663608 +0000 UTC m=+0.073526252 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 10 12:28:49 np0005580781 podman[254083]: 2026-01-10 17:28:49.177720235 +0000 UTC m=+0.166104543 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 10 12:28:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:28:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:56 np0005580781 nova_compute[237049]: 2026-01-10 17:28:56.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:28:56 np0005580781 nova_compute[237049]: 2026-01-10 17:28:56.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:28:56 np0005580781 nova_compute[237049]: 2026-01-10 17:28:56.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:28:56 np0005580781 nova_compute[237049]: 2026-01-10 17:28:56.371 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:28:56 np0005580781 nova_compute[237049]: 2026-01-10 17:28:56.372 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:28:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:28:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.378 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.378 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.378 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:29:01 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:29:01 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205175263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:29:01 np0005580781 nova_compute[237049]: 2026-01-10 17:29:01.995 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.223 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.226 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.226 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.227 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.311 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.312 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.331 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:29:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:29:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670586627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.929 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.938 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.954 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.956 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:29:02 np0005580781 nova_compute[237049]: 2026-01-10 17:29:02.956 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:29:03 np0005580781 nova_compute[237049]: 2026-01-10 17:29:03.946 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:03 np0005580781 nova_compute[237049]: 2026-01-10 17:29:03.947 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:03 np0005580781 nova_compute[237049]: 2026-01-10 17:29:03.948 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:06 np0005580781 nova_compute[237049]: 2026-01-10 17:29:06.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:29:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:29:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:29:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:29:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:29:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:29:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:20 np0005580781 podman[254173]: 2026-01-10 17:29:20.094506327 +0000 UTC m=+0.081347258 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 10 12:29:20 np0005580781 podman[254174]: 2026-01-10 17:29:20.127467428 +0000 UTC m=+0.116192614 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 10 12:29:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:29:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:22 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:29:22 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:23 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:24 np0005580781 podman[254433]: 2026-01-10 17:29:24.266996595 +0000 UTC m=+0.056982135 container create 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 10 12:29:24 np0005580781 systemd[1]: Started libpod-conmon-479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de.scope.
Jan 10 12:29:24 np0005580781 podman[254433]: 2026-01-10 17:29:24.244601089 +0000 UTC m=+0.034586679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:24 np0005580781 podman[254433]: 2026-01-10 17:29:24.36595608 +0000 UTC m=+0.155941730 container init 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 12:29:24 np0005580781 podman[254433]: 2026-01-10 17:29:24.379404018 +0000 UTC m=+0.169389588 container start 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:29:24 np0005580781 podman[254433]: 2026-01-10 17:29:24.383786335 +0000 UTC m=+0.173771875 container attach 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 10 12:29:24 np0005580781 serene_mclaren[254449]: 167 167
Jan 10 12:29:24 np0005580781 systemd[1]: libpod-479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de.scope: Deactivated successfully.
Jan 10 12:29:24 np0005580781 podman[254433]: 2026-01-10 17:29:24.386962497 +0000 UTC m=+0.176948077 container died 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 12:29:24 np0005580781 systemd[1]: var-lib-containers-storage-overlay-d52b901919fc985461a58d414f590342ab6faab5b7b1c5d5c9751975f6b443df-merged.mount: Deactivated successfully.
Jan 10 12:29:24 np0005580781 podman[254433]: 2026-01-10 17:29:24.442509069 +0000 UTC m=+0.232494619 container remove 479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:29:24 np0005580781 systemd[1]: libpod-conmon-479a1db590a2db9239ee0d982550c1ac9fdcd1bbedfb4ad5e341b751f4df31de.scope: Deactivated successfully.
Jan 10 12:29:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:24 np0005580781 podman[254473]: 2026-01-10 17:29:24.712420247 +0000 UTC m=+0.069694872 container create 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:29:24 np0005580781 systemd[1]: Started libpod-conmon-9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f.scope.
Jan 10 12:29:24 np0005580781 podman[254473]: 2026-01-10 17:29:24.68723038 +0000 UTC m=+0.044504985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:24 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:24 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:24 np0005580781 podman[254473]: 2026-01-10 17:29:24.829691841 +0000 UTC m=+0.186966496 container init 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 12:29:24 np0005580781 podman[254473]: 2026-01-10 17:29:24.842377607 +0000 UTC m=+0.199652232 container start 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:29:24 np0005580781 podman[254473]: 2026-01-10 17:29:24.847635178 +0000 UTC m=+0.204909853 container attach 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]: [
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:    {
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "available": false,
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "being_replaced": false,
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "ceph_device_lvm": false,
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "lsm_data": {},
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "lvs": [],
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "path": "/dev/sr0",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "rejected_reasons": [
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "Has a FileSystem",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "Insufficient space (<5GB)"
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        ],
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        "sys_api": {
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "actuators": null,
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "device_nodes": [
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:                "sr0"
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            ],
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "devname": "sr0",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "human_readable_size": "482.00 KB",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "id_bus": "ata",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "model": "QEMU DVD-ROM",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "nr_requests": "2",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "parent": "/dev/sr0",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "partitions": {},
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "path": "/dev/sr0",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "removable": "1",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "rev": "2.5+",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "ro": "0",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "rotational": "1",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "sas_address": "",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "sas_device_handle": "",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "scheduler_mode": "mq-deadline",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "sectors": 0,
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "sectorsize": "2048",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "size": 493568.0,
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "support_discard": "2048",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "type": "disk",
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:            "vendor": "QEMU"
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:        }
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]:    }
Jan 10 12:29:25 np0005580781 nice_bardeen[254490]: ]
Jan 10 12:29:25 np0005580781 systemd[1]: libpod-9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f.scope: Deactivated successfully.
Jan 10 12:29:25 np0005580781 podman[254473]: 2026-01-10 17:29:25.564655027 +0000 UTC m=+0.921929642 container died 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:29:25 np0005580781 systemd[1]: var-lib-containers-storage-overlay-6401f9bda8c093e4f2c9a038ea2ccc8332898f3920a6de9b4ca9df695c626723-merged.mount: Deactivated successfully.
Jan 10 12:29:25 np0005580781 podman[254473]: 2026-01-10 17:29:25.615631348 +0000 UTC m=+0.972905933 container remove 9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 10 12:29:25 np0005580781 systemd[1]: libpod-conmon-9bd4f9adf787ad651e7fc483d2c476f85267e9f0e4933b17efd9be91c5d6e36f.scope: Deactivated successfully.
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:25 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:29:26 np0005580781 podman[255290]: 2026-01-10 17:29:26.273112488 +0000 UTC m=+0.060527537 container create fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 10 12:29:26 np0005580781 systemd[1]: Started libpod-conmon-fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0.scope.
Jan 10 12:29:26 np0005580781 podman[255290]: 2026-01-10 17:29:26.245169712 +0000 UTC m=+0.032584801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:26 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:26 np0005580781 podman[255290]: 2026-01-10 17:29:26.36953794 +0000 UTC m=+0.156952979 container init fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 10 12:29:26 np0005580781 podman[255290]: 2026-01-10 17:29:26.378840899 +0000 UTC m=+0.166255938 container start fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 12:29:26 np0005580781 friendly_turing[255306]: 167 167
Jan 10 12:29:26 np0005580781 podman[255290]: 2026-01-10 17:29:26.383460762 +0000 UTC m=+0.170875811 container attach fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 12:29:26 np0005580781 systemd[1]: libpod-fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0.scope: Deactivated successfully.
Jan 10 12:29:26 np0005580781 podman[255290]: 2026-01-10 17:29:26.387037345 +0000 UTC m=+0.174452364 container died fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:29:26 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4e99729f21db00ae1fb7a109e51146725de708d0b4dda0e967544c4918e05063-merged.mount: Deactivated successfully.
Jan 10 12:29:26 np0005580781 podman[255290]: 2026-01-10 17:29:26.425814754 +0000 UTC m=+0.213229803 container remove fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_turing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 10 12:29:26 np0005580781 systemd[1]: libpod-conmon-fc5933be3a4197325413bc9a7155b2798a7d4f3c690897876e27cf61930398f0.scope: Deactivated successfully.
Jan 10 12:29:26 np0005580781 podman[255331]: 2026-01-10 17:29:26.684281432 +0000 UTC m=+0.065523342 container create ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:29:26 np0005580781 systemd[1]: Started libpod-conmon-ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe.scope.
Jan 10 12:29:26 np0005580781 podman[255331]: 2026-01-10 17:29:26.652948608 +0000 UTC m=+0.034190608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:26 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:26 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:26 np0005580781 podman[255331]: 2026-01-10 17:29:26.792338559 +0000 UTC m=+0.173580469 container init ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:29:26 np0005580781 podman[255331]: 2026-01-10 17:29:26.817760892 +0000 UTC m=+0.199002832 container start ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:29:26 np0005580781 podman[255331]: 2026-01-10 17:29:26.822565551 +0000 UTC m=+0.203807571 container attach ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:29:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:27 np0005580781 suspicious_johnson[255347]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:29:27 np0005580781 suspicious_johnson[255347]: --> All data devices are unavailable
Jan 10 12:29:27 np0005580781 systemd[1]: libpod-ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe.scope: Deactivated successfully.
Jan 10 12:29:27 np0005580781 podman[255331]: 2026-01-10 17:29:27.425654562 +0000 UTC m=+0.806896472 container died ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 10 12:29:27 np0005580781 systemd[1]: var-lib-containers-storage-overlay-effeee03aee15520a1533186ba6f18793642097aaaa184e8ed2dec89c646a0cd-merged.mount: Deactivated successfully.
Jan 10 12:29:27 np0005580781 podman[255331]: 2026-01-10 17:29:27.477263981 +0000 UTC m=+0.858505891 container remove ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:29:27 np0005580781 systemd[1]: libpod-conmon-ef926725b223cfd5dc3810244a82c3101dbbe263c3a807d1f5d0925c774ac7fe.scope: Deactivated successfully.
Jan 10 12:29:28 np0005580781 podman[255443]: 2026-01-10 17:29:28.148821107 +0000 UTC m=+0.076348623 container create 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 12:29:28 np0005580781 systemd[1]: Started libpod-conmon-34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d.scope.
Jan 10 12:29:28 np0005580781 podman[255443]: 2026-01-10 17:29:28.120223372 +0000 UTC m=+0.047750938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:28 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:28 np0005580781 podman[255443]: 2026-01-10 17:29:28.264160045 +0000 UTC m=+0.191687601 container init 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 10 12:29:28 np0005580781 podman[255443]: 2026-01-10 17:29:28.277391297 +0000 UTC m=+0.204918813 container start 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 12:29:28 np0005580781 podman[255443]: 2026-01-10 17:29:28.28198786 +0000 UTC m=+0.209515426 container attach 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 10 12:29:28 np0005580781 interesting_gagarin[255460]: 167 167
Jan 10 12:29:28 np0005580781 systemd[1]: libpod-34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d.scope: Deactivated successfully.
Jan 10 12:29:28 np0005580781 podman[255443]: 2026-01-10 17:29:28.290567887 +0000 UTC m=+0.218095373 container died 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 10 12:29:28 np0005580781 systemd[1]: var-lib-containers-storage-overlay-adf626c4002afd6bb040000fba66e5c2fa3fa115d2dff1f83d72f321df672214-merged.mount: Deactivated successfully.
Jan 10 12:29:28 np0005580781 podman[255443]: 2026-01-10 17:29:28.338415638 +0000 UTC m=+0.265943124 container remove 34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_gagarin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:29:28 np0005580781 systemd[1]: libpod-conmon-34415630525d4e133f4dba57e8a319d2a59089d808228953fd919f0786381f5d.scope: Deactivated successfully.
Jan 10 12:29:28 np0005580781 podman[255484]: 2026-01-10 17:29:28.589918065 +0000 UTC m=+0.077391544 container create a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 10 12:29:28 np0005580781 systemd[1]: Started libpod-conmon-a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e.scope.
Jan 10 12:29:28 np0005580781 podman[255484]: 2026-01-10 17:29:28.553976538 +0000 UTC m=+0.041450077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:28 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:28 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:28 np0005580781 podman[255484]: 2026-01-10 17:29:28.705360755 +0000 UTC m=+0.192834294 container init a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:29:28 np0005580781 podman[255484]: 2026-01-10 17:29:28.723819148 +0000 UTC m=+0.211292637 container start a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 10 12:29:28 np0005580781 podman[255484]: 2026-01-10 17:29:28.729881463 +0000 UTC m=+0.217354992 container attach a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 12:29:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]: {
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:    "0": [
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:        {
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "devices": [
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "/dev/loop3"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            ],
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_name": "ceph_lv0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_size": "21470642176",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "name": "ceph_lv0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "tags": {
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cluster_name": "ceph",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.crush_device_class": "",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.encrypted": "0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.objectstore": "bluestore",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osd_id": "0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.type": "block",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.vdo": "0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.with_tpm": "0"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            },
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "type": "block",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "vg_name": "ceph_vg0"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:        }
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:    ],
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:    "1": [
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:        {
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "devices": [
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "/dev/loop4"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            ],
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_name": "ceph_lv1",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_size": "21470642176",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "name": "ceph_lv1",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "tags": {
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cluster_name": "ceph",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.crush_device_class": "",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.encrypted": "0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.objectstore": "bluestore",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osd_id": "1",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.type": "block",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.vdo": "0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.with_tpm": "0"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            },
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "type": "block",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "vg_name": "ceph_vg1"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:        }
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:    ],
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:    "2": [
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:        {
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "devices": [
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "/dev/loop5"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            ],
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_name": "ceph_lv2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_size": "21470642176",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "name": "ceph_lv2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "tags": {
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.cluster_name": "ceph",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.crush_device_class": "",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.encrypted": "0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.objectstore": "bluestore",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osd_id": "2",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.type": "block",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.vdo": "0",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:                "ceph.with_tpm": "0"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            },
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "type": "block",
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:            "vg_name": "ceph_vg2"
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:        }
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]:    ]
Jan 10 12:29:29 np0005580781 vigilant_hoover[255501]: }
Jan 10 12:29:29 np0005580781 systemd[1]: libpod-a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e.scope: Deactivated successfully.
Jan 10 12:29:29 np0005580781 podman[255510]: 2026-01-10 17:29:29.186098506 +0000 UTC m=+0.048055037 container died a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:29:29 np0005580781 systemd[1]: var-lib-containers-storage-overlay-f644dc05436f2285b3a467f8ee27b73c94f64bbed4f2705b64887203217813cc-merged.mount: Deactivated successfully.
Jan 10 12:29:29 np0005580781 podman[255510]: 2026-01-10 17:29:29.242046391 +0000 UTC m=+0.104002892 container remove a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:29:29 np0005580781 systemd[1]: libpod-conmon-a13291a43c24ecd8f14c49ea50bf0ccc282824f58d0580d9e5ccc939b4b3b46e.scope: Deactivated successfully.
Jan 10 12:29:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:29 np0005580781 podman[255587]: 2026-01-10 17:29:29.877963209 +0000 UTC m=+0.058540840 container create 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 10 12:29:29 np0005580781 systemd[1]: Started libpod-conmon-659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81.scope.
Jan 10 12:29:29 np0005580781 podman[255587]: 2026-01-10 17:29:29.85618076 +0000 UTC m=+0.036758371 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:29 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:29 np0005580781 podman[255587]: 2026-01-10 17:29:29.970211101 +0000 UTC m=+0.150788792 container init 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 10 12:29:29 np0005580781 podman[255587]: 2026-01-10 17:29:29.976809261 +0000 UTC m=+0.157386882 container start 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:29:29 np0005580781 podman[255587]: 2026-01-10 17:29:29.981138186 +0000 UTC m=+0.161715817 container attach 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:29:29 np0005580781 nostalgic_darwin[255604]: 167 167
Jan 10 12:29:29 np0005580781 systemd[1]: libpod-659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81.scope: Deactivated successfully.
Jan 10 12:29:29 np0005580781 podman[255587]: 2026-01-10 17:29:29.98508578 +0000 UTC m=+0.165663411 container died 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:29:30 np0005580781 systemd[1]: var-lib-containers-storage-overlay-70d9c62277471f51618b4bad7b43ec77f7318f050c312a3e5f345622a6f4d102-merged.mount: Deactivated successfully.
Jan 10 12:29:30 np0005580781 podman[255587]: 2026-01-10 17:29:30.034175646 +0000 UTC m=+0.214753237 container remove 659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 10 12:29:30 np0005580781 systemd[1]: libpod-conmon-659c318c3c76b1defccb8eef788a9a4c28c72d20414b4af7d95215be56ec3b81.scope: Deactivated successfully.
Jan 10 12:29:30 np0005580781 podman[255628]: 2026-01-10 17:29:30.273480051 +0000 UTC m=+0.079437713 container create 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 12:29:30 np0005580781 podman[255628]: 2026-01-10 17:29:30.239331466 +0000 UTC m=+0.045289208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:29:30 np0005580781 systemd[1]: Started libpod-conmon-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope.
Jan 10 12:29:30 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:29:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:30 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:29:30 np0005580781 podman[255628]: 2026-01-10 17:29:30.387219102 +0000 UTC m=+0.193176794 container init 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 10 12:29:30 np0005580781 podman[255628]: 2026-01-10 17:29:30.396217331 +0000 UTC m=+0.202174993 container start 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:29:30 np0005580781 podman[255628]: 2026-01-10 17:29:30.401404141 +0000 UTC m=+0.207361853 container attach 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 12:29:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:31 np0005580781 lvm[255721]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:29:31 np0005580781 lvm[255721]: VG ceph_vg0 finished
Jan 10 12:29:31 np0005580781 lvm[255725]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:29:31 np0005580781 lvm[255725]: VG ceph_vg2 finished
Jan 10 12:29:31 np0005580781 lvm[255724]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:29:31 np0005580781 lvm[255724]: VG ceph_vg1 finished
Jan 10 12:29:31 np0005580781 elated_elion[255644]: {}
Jan 10 12:29:31 np0005580781 systemd[1]: libpod-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope: Deactivated successfully.
Jan 10 12:29:31 np0005580781 systemd[1]: libpod-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope: Consumed 1.695s CPU time.
Jan 10 12:29:31 np0005580781 podman[255729]: 2026-01-10 17:29:31.459950353 +0000 UTC m=+0.043392553 container died 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:29:31 np0005580781 systemd[1]: var-lib-containers-storage-overlay-7a307395d5836a2540ddabdfa2efe103e593b9f179b4e91a7d26e275eccb33bc-merged.mount: Deactivated successfully.
Jan 10 12:29:31 np0005580781 podman[255729]: 2026-01-10 17:29:31.5173606 +0000 UTC m=+0.100802780 container remove 6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elion, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 12:29:31 np0005580781 systemd[1]: libpod-conmon-6c6fd14fc8d6c2806e3080fc05228c6ab68cd1f40a98a16fe9245d3fc63b2369.scope: Deactivated successfully.
Jan 10 12:29:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:29:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:31 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:29:31 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:31 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:29:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:29:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/677186106' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:29:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:29:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/677186106' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:29:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:29:38
Jan 10 12:29:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:29:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:29:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Jan 10 12:29:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:29:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:29:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:29:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:29:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:29:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:29:48.948 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:29:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:29:48.950 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:29:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:29:48.951 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:29:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:51 np0005580781 podman[255768]: 2026-01-10 17:29:51.099843866 +0000 UTC m=+0.081341351 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 10 12:29:51 np0005580781 podman[255769]: 2026-01-10 17:29:51.156016611 +0000 UTC m=+0.138214625 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 12:29:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:29:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:57 np0005580781 nova_compute[237049]: 2026-01-10 17:29:57.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:57 np0005580781 nova_compute[237049]: 2026-01-10 17:29:57.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:29:57 np0005580781 nova_compute[237049]: 2026-01-10 17:29:57.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:29:57 np0005580781 nova_compute[237049]: 2026-01-10 17:29:57.367 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:29:58 np0005580781 nova_compute[237049]: 2026-01-10 17:29:58.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:29:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1075: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:29:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1076: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:01 np0005580781 nova_compute[237049]: 2026-01-10 17:30:01.335 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.377 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.378 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.378 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:30:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1077: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:30:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491358180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:30:02 np0005580781 nova_compute[237049]: 2026-01-10 17:30:02.950 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.159 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.160 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.160 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.161 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.229 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.230 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.247 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:30:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:30:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3342619412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.778 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.785 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.805 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.808 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:30:03 np0005580781 nova_compute[237049]: 2026-01-10 17:30:03.809 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:30:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:04 np0005580781 nova_compute[237049]: 2026-01-10 17:30:04.800 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:04 np0005580781 nova_compute[237049]: 2026-01-10 17:30:04.801 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:04 np0005580781 nova_compute[237049]: 2026-01-10 17:30:04.802 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:04 np0005580781 nova_compute[237049]: 2026-01-10 17:30:04.802 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:04 np0005580781 nova_compute[237049]: 2026-01-10 17:30:04.803 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:30:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1078: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1079: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:07 np0005580781 nova_compute[237049]: 2026-01-10 17:30:07.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1080: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:30:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:30:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:30:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:30:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:30:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:30:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1081: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1082: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1083: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1084: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1085: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1086: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:22 np0005580781 podman[255856]: 2026-01-10 17:30:22.063840593 +0000 UTC m=+0.058306675 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Jan 10 12:30:22 np0005580781 podman[255857]: 2026-01-10 17:30:22.12759902 +0000 UTC m=+0.112285018 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 10 12:30:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1087: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1088: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1089: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1090: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1091: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:30:32 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:30:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1092: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:30:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:30:33 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:30:33 np0005580781 podman[256046]: 2026-01-10 17:30:33.279665114 +0000 UTC m=+0.076861695 container create 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:30:33 np0005580781 systemd[1]: Started libpod-conmon-91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637.scope.
Jan 10 12:30:33 np0005580781 podman[256046]: 2026-01-10 17:30:33.249843918 +0000 UTC m=+0.047040579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:30:33 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:30:33 np0005580781 podman[256046]: 2026-01-10 17:30:33.399489423 +0000 UTC m=+0.196686064 container init 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:30:33 np0005580781 podman[256046]: 2026-01-10 17:30:33.407417685 +0000 UTC m=+0.204614236 container start 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 10 12:30:33 np0005580781 podman[256046]: 2026-01-10 17:30:33.411788898 +0000 UTC m=+0.208985449 container attach 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:33 np0005580781 nifty_euler[256062]: 167 167
Jan 10 12:30:33 np0005580781 systemd[1]: libpod-91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637.scope: Deactivated successfully.
Jan 10 12:30:33 np0005580781 podman[256046]: 2026-01-10 17:30:33.416111649 +0000 UTC m=+0.213308220 container died 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:33 np0005580781 systemd[1]: var-lib-containers-storage-overlay-a8dab84eb6676a3e5fba7a8b0138d188f578ea8594071399297e2477325b7cfa-merged.mount: Deactivated successfully.
Jan 10 12:30:33 np0005580781 podman[256046]: 2026-01-10 17:30:33.473275071 +0000 UTC m=+0.270471662 container remove 91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:33 np0005580781 systemd[1]: libpod-conmon-91becd772a35b1a37615361e5b924b303712e541099e82a5d0a212e179dea637.scope: Deactivated successfully.
Jan 10 12:30:33 np0005580781 podman[256086]: 2026-01-10 17:30:33.634304715 +0000 UTC m=+0.041355341 container create 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 10 12:30:33 np0005580781 systemd[1]: Started libpod-conmon-17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168.scope.
Jan 10 12:30:33 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:30:33 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:33 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:33 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:33 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:33 np0005580781 podman[256086]: 2026-01-10 17:30:33.616809994 +0000 UTC m=+0.023860630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:30:33 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:33 np0005580781 podman[256086]: 2026-01-10 17:30:33.724618506 +0000 UTC m=+0.131669202 container init 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 12:30:33 np0005580781 podman[256086]: 2026-01-10 17:30:33.733562837 +0000 UTC m=+0.140613503 container start 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:30:33 np0005580781 podman[256086]: 2026-01-10 17:30:33.737626351 +0000 UTC m=+0.144677017 container attach 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 10 12:30:34 np0005580781 happy_wescoff[256103]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:30:34 np0005580781 happy_wescoff[256103]: --> All data devices are unavailable
Jan 10 12:30:34 np0005580781 systemd[1]: libpod-17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168.scope: Deactivated successfully.
Jan 10 12:30:34 np0005580781 podman[256086]: 2026-01-10 17:30:34.346360943 +0000 UTC m=+0.753411599 container died 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:30:34 np0005580781 systemd[1]: var-lib-containers-storage-overlay-d9beacda0ae2be6c3c7ab537bd7688c8d05c643e83a626c1768f8e0eab23bf89-merged.mount: Deactivated successfully.
Jan 10 12:30:34 np0005580781 podman[256086]: 2026-01-10 17:30:34.411661183 +0000 UTC m=+0.818711829 container remove 17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wescoff, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 10 12:30:34 np0005580781 systemd[1]: libpod-conmon-17595cd5e775b44f47a7fc54349e9c3dfd880fb61c41397dd5f158b6fa073168.scope: Deactivated successfully.
Jan 10 12:30:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1093: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:35 np0005580781 podman[256197]: 2026-01-10 17:30:35.033188354 +0000 UTC m=+0.054953122 container create 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:35 np0005580781 systemd[1]: Started libpod-conmon-61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575.scope.
Jan 10 12:30:35 np0005580781 podman[256197]: 2026-01-10 17:30:35.009601052 +0000 UTC m=+0.031365850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:30:35 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:30:35 np0005580781 podman[256197]: 2026-01-10 17:30:35.126421137 +0000 UTC m=+0.148185935 container init 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 10 12:30:35 np0005580781 podman[256197]: 2026-01-10 17:30:35.132765425 +0000 UTC m=+0.154530183 container start 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:30:35 np0005580781 podman[256197]: 2026-01-10 17:30:35.135959254 +0000 UTC m=+0.157724012 container attach 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 12:30:35 np0005580781 fervent_mendel[256213]: 167 167
Jan 10 12:30:35 np0005580781 systemd[1]: libpod-61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575.scope: Deactivated successfully.
Jan 10 12:30:35 np0005580781 podman[256197]: 2026-01-10 17:30:35.1386738 +0000 UTC m=+0.160438558 container died 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 10 12:30:35 np0005580781 systemd[1]: var-lib-containers-storage-overlay-4b438eb605f37ecfaa215689dbdcb33f26460b0757be62e417fded8982c29041-merged.mount: Deactivated successfully.
Jan 10 12:30:35 np0005580781 podman[256197]: 2026-01-10 17:30:35.209870306 +0000 UTC m=+0.231635064 container remove 61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 10 12:30:35 np0005580781 systemd[1]: libpod-conmon-61b111d13079ac56d3b804d80741bfa60dd570785cdda8b229596958bf3de575.scope: Deactivated successfully.
Jan 10 12:30:35 np0005580781 podman[256239]: 2026-01-10 17:30:35.40838316 +0000 UTC m=+0.061961348 container create efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 10 12:30:35 np0005580781 systemd[1]: Started libpod-conmon-efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa.scope.
Jan 10 12:30:35 np0005580781 podman[256239]: 2026-01-10 17:30:35.382208807 +0000 UTC m=+0.035787025 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:30:35 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:30:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:35 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:35 np0005580781 podman[256239]: 2026-01-10 17:30:35.522444847 +0000 UTC m=+0.176023065 container init efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:30:35 np0005580781 podman[256239]: 2026-01-10 17:30:35.536303106 +0000 UTC m=+0.189881324 container start efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:30:35 np0005580781 podman[256239]: 2026-01-10 17:30:35.541585094 +0000 UTC m=+0.195163302 container attach efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]: {
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:    "0": [
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:        {
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "devices": [
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "/dev/loop3"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            ],
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_name": "ceph_lv0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_size": "21470642176",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "name": "ceph_lv0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "tags": {
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cluster_name": "ceph",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.crush_device_class": "",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.encrypted": "0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.objectstore": "bluestore",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osd_id": "0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.type": "block",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.vdo": "0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.with_tpm": "0"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            },
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "type": "block",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "vg_name": "ceph_vg0"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:        }
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:    ],
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:    "1": [
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:        {
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "devices": [
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "/dev/loop4"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            ],
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_name": "ceph_lv1",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_size": "21470642176",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "name": "ceph_lv1",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "tags": {
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cluster_name": "ceph",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.crush_device_class": "",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.encrypted": "0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.objectstore": "bluestore",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osd_id": "1",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.type": "block",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.vdo": "0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.with_tpm": "0"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            },
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "type": "block",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "vg_name": "ceph_vg1"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:        }
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:    ],
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:    "2": [
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:        {
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "devices": [
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "/dev/loop5"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            ],
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_name": "ceph_lv2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_size": "21470642176",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "name": "ceph_lv2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "tags": {
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.cluster_name": "ceph",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.crush_device_class": "",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.encrypted": "0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.objectstore": "bluestore",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osd_id": "2",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.type": "block",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.vdo": "0",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:                "ceph.with_tpm": "0"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            },
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "type": "block",
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:            "vg_name": "ceph_vg2"
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:        }
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]:    ]
Jan 10 12:30:35 np0005580781 affectionate_chatelet[256256]: }
Jan 10 12:30:35 np0005580781 systemd[1]: libpod-efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa.scope: Deactivated successfully.
Jan 10 12:30:35 np0005580781 podman[256239]: 2026-01-10 17:30:35.889619129 +0000 UTC m=+0.543197357 container died efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 10 12:30:35 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c7ee666a237f0309139fd7b365255fa75609dd57c4cf536884c5a99d69757baa-merged.mount: Deactivated successfully.
Jan 10 12:30:35 np0005580781 podman[256239]: 2026-01-10 17:30:35.932014027 +0000 UTC m=+0.585592235 container remove efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:35 np0005580781 systemd[1]: libpod-conmon-efea25f14474f565c50919f7a08997f62adf2b0900683b69f4b0b750d671f2aa.scope: Deactivated successfully.
Jan 10 12:30:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:30:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3857446414' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:30:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:30:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3857446414' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:30:36 np0005580781 podman[256339]: 2026-01-10 17:30:36.449950515 +0000 UTC m=+0.060979840 container create 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 10 12:30:36 np0005580781 systemd[1]: Started libpod-conmon-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope.
Jan 10 12:30:36 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:30:36 np0005580781 podman[256339]: 2026-01-10 17:30:36.42516187 +0000 UTC m=+0.036191255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:30:36 np0005580781 podman[256339]: 2026-01-10 17:30:36.53434004 +0000 UTC m=+0.145369355 container init 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 10 12:30:36 np0005580781 podman[256339]: 2026-01-10 17:30:36.540084921 +0000 UTC m=+0.151114216 container start 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 10 12:30:36 np0005580781 podman[256339]: 2026-01-10 17:30:36.5436187 +0000 UTC m=+0.154647995 container attach 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:30:36 np0005580781 zen_babbage[256355]: 167 167
Jan 10 12:30:36 np0005580781 systemd[1]: libpod-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope: Deactivated successfully.
Jan 10 12:30:36 np0005580781 conmon[256355]: conmon 1a3d2aa9cfa73c38328d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope/container/memory.events
Jan 10 12:30:36 np0005580781 podman[256339]: 2026-01-10 17:30:36.548667002 +0000 UTC m=+0.159696317 container died 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:36 np0005580781 systemd[1]: var-lib-containers-storage-overlay-c85a9df13187350def4e010090d1b0859318f6f77b92cf4eddf3967508213551-merged.mount: Deactivated successfully.
Jan 10 12:30:36 np0005580781 podman[256339]: 2026-01-10 17:30:36.590993148 +0000 UTC m=+0.202022443 container remove 1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_babbage, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:30:36 np0005580781 systemd[1]: libpod-conmon-1a3d2aa9cfa73c38328dbfc8876bb53c684881f2b64896c02765394b4fcbf1f4.scope: Deactivated successfully.
Jan 10 12:30:36 np0005580781 podman[256378]: 2026-01-10 17:30:36.796855788 +0000 UTC m=+0.056522095 container create 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:30:36 np0005580781 podman[256378]: 2026-01-10 17:30:36.772627829 +0000 UTC m=+0.032294146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:30:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1094: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:36 np0005580781 systemd[1]: Started libpod-conmon-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope.
Jan 10 12:30:36 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:30:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:36 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:30:36 np0005580781 podman[256378]: 2026-01-10 17:30:36.936618476 +0000 UTC m=+0.196284843 container init 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:30:36 np0005580781 podman[256378]: 2026-01-10 17:30:36.946091981 +0000 UTC m=+0.205758268 container start 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 10 12:30:36 np0005580781 podman[256378]: 2026-01-10 17:30:36.950275179 +0000 UTC m=+0.209941546 container attach 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:30:37 np0005580781 lvm[256470]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:30:37 np0005580781 lvm[256470]: VG ceph_vg0 finished
Jan 10 12:30:37 np0005580781 lvm[256473]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:30:37 np0005580781 lvm[256473]: VG ceph_vg1 finished
Jan 10 12:30:37 np0005580781 lvm[256475]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:30:37 np0005580781 lvm[256475]: VG ceph_vg2 finished
Jan 10 12:30:37 np0005580781 affectionate_torvalds[256394]: {}
Jan 10 12:30:37 np0005580781 systemd[1]: libpod-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope: Deactivated successfully.
Jan 10 12:30:37 np0005580781 podman[256378]: 2026-01-10 17:30:37.885540594 +0000 UTC m=+1.145206951 container died 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 10 12:30:37 np0005580781 systemd[1]: libpod-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope: Consumed 1.483s CPU time.
Jan 10 12:30:37 np0005580781 systemd[1]: var-lib-containers-storage-overlay-47836a8f0b6d9a524b7d7640ccdc857fbd6da99b5c002f52665672c84a17fbd1-merged.mount: Deactivated successfully.
Jan 10 12:30:37 np0005580781 podman[256378]: 2026-01-10 17:30:37.935253437 +0000 UTC m=+1.194919694 container remove 204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:30:37 np0005580781 systemd[1]: libpod-conmon-204d2e52211c2fc7cd5275d0181616c953ac2f2e591b216d029372c0ce27da6a.scope: Deactivated successfully.
Jan 10 12:30:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:30:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:30:38 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:30:38 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:30:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:30:38
Jan 10 12:30:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:30:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:30:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images']
Jan 10 12:30:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:30:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:30:38 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:30:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1095: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:30:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:30:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1096: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1097: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:30:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:30:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1098: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1099: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1100: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:30:48.949 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:30:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:30:48.951 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:30:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:30:48.952 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:30:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1101: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1102: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:53 np0005580781 podman[256518]: 2026-01-10 17:30:53.119009797 +0000 UTC m=+0.098152149 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 10 12:30:53 np0005580781 podman[256519]: 2026-01-10 17:30:53.186113905 +0000 UTC m=+0.165015100 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 10 12:30:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:30:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1103: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1104: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1105: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:30:59 np0005580781 nova_compute[237049]: 2026-01-10 17:30:59.345 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:59 np0005580781 nova_compute[237049]: 2026-01-10 17:30:59.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:30:59 np0005580781 nova_compute[237049]: 2026-01-10 17:30:59.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:30:59 np0005580781 nova_compute[237049]: 2026-01-10 17:30:59.372 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:30:59 np0005580781 nova_compute[237049]: 2026-01-10 17:30:59.372 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:30:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:00 np0005580781 nova_compute[237049]: 2026-01-10 17:31:00.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1106: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:02 np0005580781 nova_compute[237049]: 2026-01-10 17:31:02.370 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:02 np0005580781 nova_compute[237049]: 2026-01-10 17:31:02.392 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:31:02 np0005580781 nova_compute[237049]: 2026-01-10 17:31:02.392 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:31:02 np0005580781 nova_compute[237049]: 2026-01-10 17:31:02.392 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:31:02 np0005580781 nova_compute[237049]: 2026-01-10 17:31:02.393 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:31:02 np0005580781 nova_compute[237049]: 2026-01-10 17:31:02.393 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:31:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1107: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:31:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476922189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.010 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.281 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.282 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.283 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.283 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.463 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.464 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.552 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing inventories for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.634 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating ProviderTree inventory for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.635 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Updating inventory in ProviderTree for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.657 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing aggregate associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.682 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Refreshing trait associations for resource provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_FDC,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 10 12:31:03 np0005580781 nova_compute[237049]: 2026-01-10 17:31:03.699 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:31:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:31:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260666177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:31:04 np0005580781 nova_compute[237049]: 2026-01-10 17:31:04.305 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:31:04 np0005580781 nova_compute[237049]: 2026-01-10 17:31:04.312 237053 DEBUG nova.compute.provider_tree [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed in ProviderTree for provider: 5f85855c-8a9b-43b5-ae49-f5846b9dcebe update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 10 12:31:04 np0005580781 nova_compute[237049]: 2026-01-10 17:31:04.327 237053 DEBUG nova.scheduler.client.report [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Inventory has not changed for provider 5f85855c-8a9b-43b5-ae49-f5846b9dcebe based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 10 12:31:04 np0005580781 nova_compute[237049]: 2026-01-10 17:31:04.329 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 10 12:31:04 np0005580781 nova_compute[237049]: 2026-01-10 17:31:04.329 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:31:04 np0005580781 nova_compute[237049]: 2026-01-10 17:31:04.330 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:04 np0005580781 nova_compute[237049]: 2026-01-10 17:31:04.330 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 10 12:31:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:04 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1108: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:05 np0005580781 nova_compute[237049]: 2026-01-10 17:31:05.317 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:05 np0005580781 nova_compute[237049]: 2026-01-10 17:31:05.317 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:05 np0005580781 nova_compute[237049]: 2026-01-10 17:31:05.318 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:05 np0005580781 nova_compute[237049]: 2026-01-10 17:31:05.318 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:05 np0005580781 nova_compute[237049]: 2026-01-10 17:31:05.318 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 10 12:31:05 np0005580781 nova_compute[237049]: 2026-01-10 17:31:05.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.082543) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266082621, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 2383581, "memory_usage": 2429832, "flush_reason": "Manual Compaction"}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266100876, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2300057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20680, "largest_seqno": 22727, "table_properties": {"data_size": 2290858, "index_size": 5757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18448, "raw_average_key_size": 19, "raw_value_size": 2272434, "raw_average_value_size": 2454, "num_data_blocks": 264, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768066041, "oldest_key_time": 1768066041, "file_creation_time": 1768066266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 18413 microseconds, and 9694 cpu microseconds.
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.100968) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2300057 bytes OK
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.100999) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.102873) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.102903) EVENT_LOG_v1 {"time_micros": 1768066266102894, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.102931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2375018, prev total WAL file size 2375018, number of live WAL files 2.
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.104247) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2246KB)], [50(5788KB)]
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266104375, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 8227567, "oldest_snapshot_seqno": -1}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4491 keys, 7012119 bytes, temperature: kUnknown
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266151009, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7012119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6978678, "index_size": 21107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 107499, "raw_average_key_size": 23, "raw_value_size": 6894788, "raw_average_value_size": 1535, "num_data_blocks": 896, "num_entries": 4491, "num_filter_entries": 4491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768064235, "oldest_key_time": 0, "file_creation_time": 1768066266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f71f9c2-f3c5-4fc3-bcd9-6ffe346ae9d4", "db_session_id": "VPFJD76VNV79HUMFHEYZ", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.151490) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7012119 bytes
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.153472) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.9 rd, 149.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 5.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5005, records dropped: 514 output_compression: NoCompression
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.153520) EVENT_LOG_v1 {"time_micros": 1768066266153493, "job": 26, "event": "compaction_finished", "compaction_time_micros": 46773, "compaction_time_cpu_micros": 23987, "output_level": 6, "num_output_files": 1, "total_output_size": 7012119, "num_input_records": 5005, "num_output_records": 4491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266154555, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768066266156663, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.104116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:31:06 np0005580781 ceph-mon[75249]: rocksdb: (Original Log Time 2026/01/10-17:31:06.156825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 10 12:31:06 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1109: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:07 np0005580781 nova_compute[237049]: 2026-01-10 17:31:07.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:31:08 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1110: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:31:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:31:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:31:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:31:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:31:09 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:31:09 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:10 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1111: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:12 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1112: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:14 np0005580781 nova_compute[237049]: 2026-01-10 17:31:14.347 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 10 12:31:14 np0005580781 nova_compute[237049]: 2026-01-10 17:31:14.348 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 10 12:31:14 np0005580781 nova_compute[237049]: 2026-01-10 17:31:14.372 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 10 12:31:14 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:14 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1113: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:16 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1114: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:18 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1115: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:19 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:20 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1116: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:22 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1117: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:24 np0005580781 podman[256606]: 2026-01-10 17:31:24.07689955 +0000 UTC m=+0.068058456 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 10 12:31:24 np0005580781 podman[256607]: 2026-01-10 17:31:24.11441918 +0000 UTC m=+0.107187551 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 10 12:31:24 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:24 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1118: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:26 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1119: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:28 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1120: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:29 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:30 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1121: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:32 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1122: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:34 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:34 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1123: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 10 12:31:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/964493788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 10 12:31:36 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 10 12:31:36 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/964493788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 10 12:31:36 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1124: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Optimize plan auto_2026-01-10_17:31:38
Jan 10 12:31:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 10 12:31:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] do_upmap
Jan 10 12:31:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', 'images', '.mgr', 'vms', 'cephfs.cephfs.data']
Jan 10 12:31:38 np0005580781 ceph-mgr[75538]: [balancer INFO root] prepared 0/10 upmap changes
Jan 10 12:31:38 np0005580781 podman[256748]: 2026-01-10 17:31:38.813582555 +0000 UTC m=+0.104531737 container exec 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 10 12:31:38 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1125: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:38 np0005580781 podman[256748]: 2026-01-10 17:31:38.927812703 +0000 UTC m=+0.218762045 container exec_died 69622407e4b336ab6e593d34ac16bfb19f7f8835a32ed22c7a89e50ee8c8d8e7 (image=quay.io/ceph/ceph:v20, name=ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] scanning for idle connections..
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [volumes INFO mgr_util] cleaning up connections: []
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:31:39 np0005580781 ceph-mgr[75538]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 10 12:31:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:31:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:39 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:31:39 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:31:40 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:31:40 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1126: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:41 np0005580781 podman[257057]: 2026-01-10 17:31:41.108989734 +0000 UTC m=+0.056655416 container create b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:31:41 np0005580781 systemd[1]: Started libpod-conmon-b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01.scope.
Jan 10 12:31:41 np0005580781 podman[257057]: 2026-01-10 17:31:41.084064027 +0000 UTC m=+0.031729819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:31:41 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:31:41 np0005580781 podman[257057]: 2026-01-10 17:31:41.217573814 +0000 UTC m=+0.165239606 container init b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 10 12:31:41 np0005580781 podman[257057]: 2026-01-10 17:31:41.228768367 +0000 UTC m=+0.176434099 container start b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:31:41 np0005580781 podman[257057]: 2026-01-10 17:31:41.232528382 +0000 UTC m=+0.180194114 container attach b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:31:41 np0005580781 hardcore_stonebraker[257073]: 167 167
Jan 10 12:31:41 np0005580781 systemd[1]: libpod-b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01.scope: Deactivated successfully.
Jan 10 12:31:41 np0005580781 podman[257057]: 2026-01-10 17:31:41.236997658 +0000 UTC m=+0.184663410 container died b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:31:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 10 12:31:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:41 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 10 12:31:41 np0005580781 systemd[1]: var-lib-containers-storage-overlay-185260400ca50c3e4961ce223636abf17b336c004f27b6ef38024f79890bdbf5-merged.mount: Deactivated successfully.
Jan 10 12:31:41 np0005580781 podman[257057]: 2026-01-10 17:31:41.289940819 +0000 UTC m=+0.237606541 container remove b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 10 12:31:41 np0005580781 systemd[1]: libpod-conmon-b699315245a68bfec7a5ea206e73aaebcc5fa368d189f861f1aa77d6df6efd01.scope: Deactivated successfully.
Jan 10 12:31:41 np0005580781 podman[257097]: 2026-01-10 17:31:41.506846191 +0000 UTC m=+0.059066194 container create 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:31:41 np0005580781 systemd[1]: Started libpod-conmon-1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833.scope.
Jan 10 12:31:41 np0005580781 podman[257097]: 2026-01-10 17:31:41.474522976 +0000 UTC m=+0.026742969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:31:41 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:31:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:41 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:41 np0005580781 podman[257097]: 2026-01-10 17:31:41.617892459 +0000 UTC m=+0.170112482 container init 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 10 12:31:41 np0005580781 podman[257097]: 2026-01-10 17:31:41.627717594 +0000 UTC m=+0.179937577 container start 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 10 12:31:41 np0005580781 podman[257097]: 2026-01-10 17:31:41.631995404 +0000 UTC m=+0.184215417 container attach 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 10 12:31:42 np0005580781 affectionate_chaplygin[257113]: --> passed data devices: 0 physical, 3 LVM
Jan 10 12:31:42 np0005580781 affectionate_chaplygin[257113]: --> All data devices are unavailable
Jan 10 12:31:42 np0005580781 systemd[1]: libpod-1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833.scope: Deactivated successfully.
Jan 10 12:31:42 np0005580781 podman[257097]: 2026-01-10 17:31:42.198677025 +0000 UTC m=+0.750897048 container died 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 10 12:31:42 np0005580781 systemd[1]: var-lib-containers-storage-overlay-7cfd32e8ea4107f23876ea802ca6ddd1771077a31a661ef436ed6213df8ea19e-merged.mount: Deactivated successfully.
Jan 10 12:31:42 np0005580781 podman[257097]: 2026-01-10 17:31:42.281133483 +0000 UTC m=+0.833353466 container remove 1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:31:42 np0005580781 systemd[1]: libpod-conmon-1b2d97d61a1cad1fda6e0e5b174a696eaadc8c8b0b4d7971d9dbe163f1e43833.scope: Deactivated successfully.
Jan 10 12:31:42 np0005580781 podman[257207]: 2026-01-10 17:31:42.844373868 +0000 UTC m=+0.066238475 container create 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 10 12:31:42 np0005580781 systemd[1]: Started libpod-conmon-3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a.scope.
Jan 10 12:31:42 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1127: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:42 np0005580781 podman[257207]: 2026-01-10 17:31:42.817063323 +0000 UTC m=+0.038928020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:31:42 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:31:42 np0005580781 podman[257207]: 2026-01-10 17:31:42.944451549 +0000 UTC m=+0.166316146 container init 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 10 12:31:42 np0005580781 podman[257207]: 2026-01-10 17:31:42.956106075 +0000 UTC m=+0.177970712 container start 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:31:42 np0005580781 podman[257207]: 2026-01-10 17:31:42.961296251 +0000 UTC m=+0.183160878 container attach 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 10 12:31:42 np0005580781 modest_chaum[257223]: 167 167
Jan 10 12:31:42 np0005580781 systemd[1]: libpod-3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a.scope: Deactivated successfully.
Jan 10 12:31:42 np0005580781 podman[257207]: 2026-01-10 17:31:42.964276584 +0000 UTC m=+0.186141191 container died 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:31:42 np0005580781 systemd[1]: var-lib-containers-storage-overlay-5392128390e656b8e6d89b5265eb911182d92622bae2ec34bc989cdfd07f599d-merged.mount: Deactivated successfully.
Jan 10 12:31:43 np0005580781 podman[257207]: 2026-01-10 17:31:43.008140892 +0000 UTC m=+0.230005489 container remove 3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 10 12:31:43 np0005580781 systemd[1]: libpod-conmon-3535445077ce33c428c31db07114b41bb6b464b6e01cbfbf9b9cc9db0ba3639a.scope: Deactivated successfully.
Jan 10 12:31:43 np0005580781 podman[257248]: 2026-01-10 17:31:43.232838611 +0000 UTC m=+0.063870709 container create 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 10 12:31:43 np0005580781 systemd[1]: Started libpod-conmon-1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573.scope.
Jan 10 12:31:43 np0005580781 podman[257248]: 2026-01-10 17:31:43.202619085 +0000 UTC m=+0.033651173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:31:43 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:31:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:43 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:43 np0005580781 podman[257248]: 2026-01-10 17:31:43.334843296 +0000 UTC m=+0.165875434 container init 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:31:43 np0005580781 podman[257248]: 2026-01-10 17:31:43.345165535 +0000 UTC m=+0.176197623 container start 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 10 12:31:43 np0005580781 podman[257248]: 2026-01-10 17:31:43.349233419 +0000 UTC m=+0.180265507 container attach 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 10 12:31:43 np0005580781 systemd-logind[798]: New session 55 of user zuul.
Jan 10 12:31:43 np0005580781 systemd[1]: Started Session 55 of User zuul.
Jan 10 12:31:43 np0005580781 funny_gates[257265]: {
Jan 10 12:31:43 np0005580781 funny_gates[257265]:    "0": [
Jan 10 12:31:43 np0005580781 funny_gates[257265]:        {
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "devices": [
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "/dev/loop3"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            ],
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_name": "ceph_lv0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_size": "21470642176",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9aa1dcc9-88f4-49c0-be40-744313964d3e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "name": "ceph_lv0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "tags": {
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.block_uuid": "fwUplW-oU1O-8Dve-qChj-B5BI-bwLw-IoasQ2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cluster_name": "ceph",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.crush_device_class": "",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.encrypted": "0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.objectstore": "bluestore",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osd_fsid": "9aa1dcc9-88f4-49c0-be40-744313964d3e",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osd_id": "0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.type": "block",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.vdo": "0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.with_tpm": "0"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            },
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "type": "block",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "vg_name": "ceph_vg0"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:        }
Jan 10 12:31:43 np0005580781 funny_gates[257265]:    ],
Jan 10 12:31:43 np0005580781 funny_gates[257265]:    "1": [
Jan 10 12:31:43 np0005580781 funny_gates[257265]:        {
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "devices": [
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "/dev/loop4"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            ],
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_name": "ceph_lv1",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_size": "21470642176",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e8e31518-65ae-476c-891c-e2fc550d0a1c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "name": "ceph_lv1",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "tags": {
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.block_uuid": "jl7ctE-m3eu-S0gd-1Nja-dfWZ-muwN-XTCvqE",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cluster_name": "ceph",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.crush_device_class": "",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.encrypted": "0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.objectstore": "bluestore",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osd_fsid": "e8e31518-65ae-476c-891c-e2fc550d0a1c",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osd_id": "1",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.type": "block",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.vdo": "0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.with_tpm": "0"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            },
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "type": "block",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "vg_name": "ceph_vg1"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:        }
Jan 10 12:31:43 np0005580781 funny_gates[257265]:    ],
Jan 10 12:31:43 np0005580781 funny_gates[257265]:    "2": [
Jan 10 12:31:43 np0005580781 funny_gates[257265]:        {
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "devices": [
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "/dev/loop5"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            ],
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_name": "ceph_lv2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_size": "21470642176",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=87473727-6468-4f68-8371-e0bf60edaa43,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "lv_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "name": "ceph_lv2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "tags": {
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.block_uuid": "47dGd5-2Fbi-Yx1g-772p-7hFB-JWTz-i0cvRL",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cephx_lockbox_secret": "",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cluster_fsid": "a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.cluster_name": "ceph",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.crush_device_class": "",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.encrypted": "0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.objectstore": "bluestore",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osd_fsid": "87473727-6468-4f68-8371-e0bf60edaa43",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osd_id": "2",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.type": "block",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.vdo": "0",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:                "ceph.with_tpm": "0"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            },
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "type": "block",
Jan 10 12:31:43 np0005580781 funny_gates[257265]:            "vg_name": "ceph_vg2"
Jan 10 12:31:43 np0005580781 funny_gates[257265]:        }
Jan 10 12:31:43 np0005580781 funny_gates[257265]:    ]
Jan 10 12:31:43 np0005580781 funny_gates[257265]: }
Jan 10 12:31:43 np0005580781 systemd[1]: libpod-1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573.scope: Deactivated successfully.
Jan 10 12:31:43 np0005580781 podman[257248]: 2026-01-10 17:31:43.740139281 +0000 UTC m=+0.571171339 container died 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 10 12:31:43 np0005580781 systemd[1]: var-lib-containers-storage-overlay-57ab77d16a5d9908af97fa2220c4f10309903b5e270f0203c9218d418b046e65-merged.mount: Deactivated successfully.
Jan 10 12:31:43 np0005580781 podman[257248]: 2026-01-10 17:31:43.788270458 +0000 UTC m=+0.619302556 container remove 1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 10 12:31:43 np0005580781 systemd[1]: libpod-conmon-1e1716e9aa1d57faf05af5fcf8ec186a8b262422cc3f0fb5f3b89c3897814573.scope: Deactivated successfully.
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] _maybe_adjust
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.365931724612428e-07 of space, bias 1.0, pg target 0.00016097795173837282 quantized to 32 (current 32)
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.1924810223865999e-07 of space, bias 1.0, pg target 3.5774430671597993e-05 quantized to 32 (current 32)
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668695260671586 of space, bias 1.0, pg target 0.2006085782014758 quantized to 32 (current 32)
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0462037643091811e-06 of space, bias 4.0, pg target 0.0012554445171710175 quantized to 16 (current 16)
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 10 12:31:44 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:44 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1128: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:45 np0005580781 podman[257434]: 2026-01-10 17:31:45.076469616 +0000 UTC m=+0.050334100 container create 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:31:45 np0005580781 systemd[1]: Started libpod-conmon-4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b.scope.
Jan 10 12:31:45 np0005580781 podman[257434]: 2026-01-10 17:31:45.05267295 +0000 UTC m=+0.026537454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:31:45 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:31:45 np0005580781 podman[257434]: 2026-01-10 17:31:45.176166496 +0000 UTC m=+0.150030990 container init 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 10 12:31:45 np0005580781 podman[257434]: 2026-01-10 17:31:45.188528782 +0000 UTC m=+0.162393246 container start 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 10 12:31:45 np0005580781 podman[257434]: 2026-01-10 17:31:45.192801702 +0000 UTC m=+0.166666166 container attach 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 10 12:31:45 np0005580781 focused_rhodes[257450]: 167 167
Jan 10 12:31:45 np0005580781 systemd[1]: libpod-4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b.scope: Deactivated successfully.
Jan 10 12:31:45 np0005580781 podman[257434]: 2026-01-10 17:31:45.197583436 +0000 UTC m=+0.171447900 container died 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 10 12:31:45 np0005580781 systemd[1]: var-lib-containers-storage-overlay-bb3eaadf188c7484c0122bf159fe6b0f478d7df46569ce8473d4de69ca26a2d4-merged.mount: Deactivated successfully.
Jan 10 12:31:45 np0005580781 podman[257434]: 2026-01-10 17:31:45.237955366 +0000 UTC m=+0.211819830 container remove 4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 10 12:31:45 np0005580781 systemd[1]: libpod-conmon-4fcdf1b7e461ad68ed4433d0abef992a31fb9b1b2b4cb660c8a36bf08f082e1b.scope: Deactivated successfully.
Jan 10 12:31:45 np0005580781 podman[257492]: 2026-01-10 17:31:45.419048345 +0000 UTC m=+0.050521596 container create e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 10 12:31:45 np0005580781 systemd[1]: Started libpod-conmon-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope.
Jan 10 12:31:45 np0005580781 systemd[1]: Started libcrun container.
Jan 10 12:31:45 np0005580781 podman[257492]: 2026-01-10 17:31:45.393902421 +0000 UTC m=+0.025375722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 10 12:31:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:45 np0005580781 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 10 12:31:45 np0005580781 podman[257492]: 2026-01-10 17:31:45.50534928 +0000 UTC m=+0.136822551 container init e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 10 12:31:45 np0005580781 podman[257492]: 2026-01-10 17:31:45.511944265 +0000 UTC m=+0.143417506 container start e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 10 12:31:45 np0005580781 podman[257492]: 2026-01-10 17:31:45.527533361 +0000 UTC m=+0.159006632 container attach e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 10 12:31:46 np0005580781 lvm[257660]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:31:46 np0005580781 lvm[257660]: VG ceph_vg0 finished
Jan 10 12:31:46 np0005580781 lvm[257663]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:31:46 np0005580781 lvm[257663]: VG ceph_vg2 finished
Jan 10 12:31:46 np0005580781 lvm[257661]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:31:46 np0005580781 lvm[257661]: VG ceph_vg1 finished
Jan 10 12:31:46 np0005580781 sharp_carson[257512]: {}
Jan 10 12:31:46 np0005580781 systemd[1]: libpod-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope: Deactivated successfully.
Jan 10 12:31:46 np0005580781 systemd[1]: libpod-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope: Consumed 1.416s CPU time.
Jan 10 12:31:46 np0005580781 podman[257492]: 2026-01-10 17:31:46.339153308 +0000 UTC m=+0.970626549 container died e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 10 12:31:46 np0005580781 systemd[1]: var-lib-containers-storage-overlay-ea49157059eb38ec875a50c970867bb151de63d46f2e962444b4290ad573b1ba-merged.mount: Deactivated successfully.
Jan 10 12:31:46 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:46 np0005580781 podman[257492]: 2026-01-10 17:31:46.391452302 +0000 UTC m=+1.022925553 container remove e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 10 12:31:46 np0005580781 systemd[1]: libpod-conmon-e1d7c8691300de99ee62000b45624a842b3a5efc5ce98bae4437f0983b8dec5e.scope: Deactivated successfully.
Jan 10 12:31:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 10 12:31:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:46 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 10 12:31:46 np0005580781 ceph-mon[75249]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:46 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1129: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:47 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15014 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:47 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:47 np0005580781 ceph-mon[75249]: from='mgr.14122 192.168.122.100:0/386119902' entity='mgr.compute-0.mkxlpr' 
Jan 10 12:31:47 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 10 12:31:47 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411466825' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 10 12:31:48 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1130: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:31:48.950 152671 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:31:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:31:48.953 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:31:48 np0005580781 ovn_metadata_agent[152665]: 2026-01-10 17:31:48.953 152671 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:31:49 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:50 np0005580781 ovs-vsctl[257791]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 10 12:31:50 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1131: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:51 np0005580781 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 10 12:31:51 np0005580781 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 10 12:31:51 np0005580781 virtqemud[236762]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 10 12:31:52 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: cache status {prefix=cache status} (starting...)
Jan 10 12:31:52 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: client ls {prefix=client ls} (starting...)
Jan 10 12:31:52 np0005580781 lvm[258109]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 10 12:31:52 np0005580781 lvm[258109]: VG ceph_vg1 finished
Jan 10 12:31:52 np0005580781 lvm[258136]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 10 12:31:52 np0005580781 lvm[258136]: VG ceph_vg2 finished
Jan 10 12:31:52 np0005580781 lvm[258142]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 10 12:31:52 np0005580781 lvm[258142]: VG ceph_vg0 finished
Jan 10 12:31:52 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15018 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:52 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1132: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:53 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: damage ls {prefix=damage ls} (starting...)
Jan 10 12:31:53 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump loads {prefix=dump loads} (starting...)
Jan 10 12:31:53 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15020 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:53 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 10 12:31:53 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 10 12:31:53 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 10 12:31:53 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 10 12:31:53 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15024 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:53 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 10 12:31:53 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773928204' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 10 12:31:54 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 10 12:31:54 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 10 12:31:54 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15026 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:54 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:31:54.330+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:31:54 np0005580781 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:31:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 10 12:31:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214932499' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 10 12:31:54 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: ops {prefix=ops} (starting...)
Jan 10 12:31:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 10 12:31:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104891342' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 10 12:31:54 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 10 12:31:54 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905887018' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 10 12:31:54 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1133: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:55 np0005580781 podman[258406]: 2026-01-10 17:31:55.096667742 +0000 UTC m=+0.081784230 container health_status 4a66317a3a3660e903224f5e823ead0a57597ae17c6ac34919f8a3aeea25124f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 10 12:31:55 np0005580781 podman[258418]: 2026-01-10 17:31:55.136209758 +0000 UTC m=+0.122174920 container health_status a2049b55215718ead6719d161f51721cb7ba61ad8a59a8a1174e05b17008fa6f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '8f67373ba0b0ce6dbf2ab00c55997742fe2c1754e7d52ad8ccc21d20cf27baaa-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db-5d5860b81bb4554533f63333f8924ad74680ebcd4cf21e9d645f2534f65ca0db'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 10 12:31:55 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: session ls {prefix=session ls} (starting...)
Jan 10 12:31:55 np0005580781 ceph-mds[93917]: mds.cephfs.compute-0.anmivh asok_command: status {prefix=status} (starting...)
Jan 10 12:31:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 12:31:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124089951' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 12:31:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 10 12:31:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300202661' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 10 12:31:55 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15038 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:55 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 12:31:55 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113301332' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 12:31:56 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15042 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 12:31:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643842098' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 12:31:56 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1134: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:56 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 10 12:31:56 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843629315' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 10 12:31:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 12:31:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279908591' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 12:31:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 10 12:31:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/489480267' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 10 12:31:57 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 10 12:31:57 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820373565' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 10 12:31:57 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15054 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:57 np0005580781 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 12:31:57 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:31:57.947+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 10 12:31:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 12:31:58 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242109946' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 12:31:58 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 10 12:31:58 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1639877083' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 10 12:31:58 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15060 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:58 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1135: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:31:59 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15064 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 10 12:31:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982846763' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x40351/0x9b000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 229376 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 155648 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386526 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x44267/0xa4000, compress 0x0/0x0/0x0, omap 0x763e, meta 0x1a289c2), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 61 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 196608 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 172032 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.029172897s of 11.149907112s, submitted: 15
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402333 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 212992 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe115000/0x0/0x4ffc00000, data 0x4af47/0xb3000, compress 0x0/0x0/0x0, omap 0x82f5, meta 0x1a27d0b), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 204800 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 180224 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407980 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fe110000/0x0/0x4ffc00000, data 0x4c55d/0xb6000, compress 0x0/0x0/0x0, omap 0x8580, meta 0x1a27a80), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 106496 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f(unlocked)] enter Initial
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001224 0 0.000000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=0 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000076 1 0.000192
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000776 0 0.000000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000444 1 0.001106
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000831 2 0.000183
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 67 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 98304 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012900 2 0.000203
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.014360 0 0.000000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.003906 3 0.000352
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000141 1 0.000221
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.137376 3 0.000191
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 pg_epoch: 68 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=67/68 n=1 ec=39/23 lis/c=67/48 les/c/f=68/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 81920 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416166 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 65536 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 57344 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10f000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.526871681s of 15.590026855s, submitted: 16
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418577 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 73728 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 8192 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 1040384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424802 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61898752 unmapped: 1007616 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 999424 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427213 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 991232 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 983040 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 974848 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.689327240s of 15.030009270s, submitted: 10
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429626 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 966656 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 958464 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 950272 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 942080 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 925696 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 917504 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432039 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 909312 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.899189949s of 10.917829514s, submitted: 4
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 892928 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436865 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62029824 unmapped: 876544 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 868352 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 860160 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 851968 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 441687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 843776 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 835584 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884464264s of 10.945921898s, submitted: 8
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 827392 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 819200 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 811008 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446511 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 802816 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 794624 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 778240 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 770048 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451337 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 761856 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 753664 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.035273552s of 12.054781914s, submitted: 8
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 745472 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 737280 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 729088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 720896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 456163 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 712704 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 679936 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 671744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458576 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 663552 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.947065353s of 12.031906128s, submitted: 6
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62251008 unmapped: 655360 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 696320 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 463402 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 688128 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 647168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 638976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 630784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 614400 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 465815 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.993368149s of 15.010603905s, submitted: 6
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 606208 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 598016 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470637 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 589824 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 581632 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 573440 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 565248 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473048 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 557056 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 540672 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.969283104s of 11.986205101s, submitted: 8
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 532480 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 516096 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 499712 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 491520 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 482692 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 483328 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 475136 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 485103 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 466944 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.836256981s of 13.859765053s, submitted: 8
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 458752 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 450560 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 487516 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 442368 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 434176 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 425984 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489927 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 401408 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 393216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.158172607s of 11.166720390s, submitted: 4
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494751 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 327680 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497162 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 319488 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 311296 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 245760 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 229376 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 57344 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 802816 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 761856 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 286720 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 278528 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 16.31 MB, 0.03 MB/s#012Interval WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc ms_handle_reset ms_handle_reset con 0x5621df718000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc handle_mgr_configure stats_period=5
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4222 writes, 19K keys, 4222 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4222 writes, 393 syncs, 10.74 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5621ddea9a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000109 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501984 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe111000/0x0/0x4ffc00000, data 0x4dc43/0xbb000, compress 0x0/0x0/0x0, omap 0x895b, meta 0x1a276a5), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1018.819763184s of 1018.845520020s, submitted: 10
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1335296 heap: 70705152 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe10c000/0x0/0x4ffc00000, data 0x4f210/0xbe000, compress 0x0/0x0/0x0, omap 0x8be6, meta 0x1a2741a), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 70 ms_handle_reset con 0x5621e1456c00 session 0x5621e19bee00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 10534912 heap: 75366400 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 583003 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 18604032 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 72 ms_handle_reset con 0x5621e1457000 session 0x5621e19db180
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 18563072 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 587971 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 18522112 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 72 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc3469/0xd3b000, compress 0x0/0x0/0x0, omap 0x9501, meta 0x1a26aff), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.282839775s of 10.690481186s, submitted: 34
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 heartbeat osd_stat(store_statfs(0x4fd48c000/0x0/0x4ffc00000, data 0xcc4919/0xd3e000, compress 0x0/0x0/0x0, omap 0x97d9, meta 0x1a26827), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 589687 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 18513920 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.774143219s of 35.782062531s, submitted: 13
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 18374656 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1457400 session 0x5621deefbdc0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 74 heartbeat osd_stat(store_statfs(0x4fd48d000/0x0/0x4ffc00000, data 0xcc493c/0xd3f000, compress 0x0/0x0/0x0, omap 0x9a85, meta 0x1a2657b), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03c00 session 0x5621e195a000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 18046976 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 74 ms_handle_reset con 0x5621e1c03400 session 0x5621e1462a80
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 75 ms_handle_reset con 0x5621e1457800 session 0x5621e1462fc0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604284 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 17104896 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1457400 session 0x5621e077b6c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1456c00 session 0x5621e0a22e00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02c00 session 0x5621e067ea80
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 76 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc78e6/0xd47000, compress 0x0/0x0/0x0, omap 0xa435, meta 0x1a25bcb), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 76 ms_handle_reset con 0x5621e1c02800 session 0x5621df500000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 17080320 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 17088512 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 77 ms_handle_reset con 0x5621e1456c00 session 0x5621e077b180
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 15884288 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 78 ms_handle_reset con 0x5621e1457400 session 0x5621e19da380
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 15704064 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610354 data_alloc: 218103808 data_used: 858
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 15663104 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 79 ms_handle_reset con 0x5621e1bcec00 session 0x5621e14636c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd47a000/0x0/0x4ffc00000, data 0xccb716/0xd4d000, compress 0x0/0x0/0x0, omap 0xa7da, meta 0x1a25826), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 15753216 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 81 ms_handle_reset con 0x5621e1c03400 session 0x5621e0a23a40
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 15540224 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 82 ms_handle_reset con 0x5621e1c02c00 session 0x5621df8eca80
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 15392768 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.066205978s of 10.445180893s, submitted: 195
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947dc0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 83 ms_handle_reset con 0x5621e1457000 session 0x5621df500380
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 15171584 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 84 ms_handle_reset con 0x5621e1457800 session 0x5621e0a22700
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645550 data_alloc: 218103808 data_used: 4919
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 15007744 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fd468000/0x0/0x4ffc00000, data 0xcd301e/0xd60000, compress 0x0/0x0/0x0, omap 0xcff3, meta 0x1a2300d), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 85 ms_handle_reset con 0x5621e1457400 session 0x5621e19be8c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 14778368 heap: 83763200 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c27/0xd6b000, compress 0x0/0x0/0x0, omap 0xdcb6, meta 0x1a2234a), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 13451264 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 86 ms_handle_reset con 0x5621e1456c00 session 0x5621e19da8c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 21553152 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 20324352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 87 ms_handle_reset con 0x5621e1457000 session 0x5621df500e00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664901 data_alloc: 218103808 data_used: 4919
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1456000 session 0x5621e19468c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 88 ms_handle_reset con 0x5621e1bcec00 session 0x5621e1947a40
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0xcdadfd/0xd71000, compress 0x0/0x0/0x0, omap 0xea36, meta 0x2bc15ca), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 18849792 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 89 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19801c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 19030016 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 90 ms_handle_reset con 0x5621e1bd8c00 session 0x5621e0a22a80
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 18989056 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 91 ms_handle_reset con 0x5621e1456000 session 0x5621e1981180
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.402823448s of 10.174050331s, submitted: 340
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 18792448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 92 ms_handle_reset con 0x5621e1457000 session 0x5621e1946e00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fc2b3000/0x0/0x4ffc00000, data 0xcdeaed/0xd79000, compress 0x0/0x0/0x0, omap 0x108a3, meta 0x2bbf75d), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675212 data_alloc: 218103808 data_used: 21160
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bcec00 session 0x5621e19bf880
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 18677760 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 93 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19db880
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677676 data_alloc: 218103808 data_used: 21160
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 94 ms_handle_reset con 0x5621e1c03000 session 0x5621e1947340
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc2a8000/0x0/0x4ffc00000, data 0xce15b7/0xd7e000, compress 0x0/0x0/0x0, omap 0x1100f, meta 0x2bbeff1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 95 ms_handle_reset con 0x5621e1456000 session 0x5621e1981500
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c02400 session 0x5621e196b6c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fc2a4000/0x0/0x4ffc00000, data 0xce41dd/0xd84000, compress 0x0/0x0/0x0, omap 0x11669, meta 0x2bbe997), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683060 data_alloc: 218103808 data_used: 29317
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 18653184 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 96 ms_handle_reset con 0x5621e1c03800 session 0x5621e1991c00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.875562668s of 17.118930817s, submitted: 126
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1457000 session 0x5621e0061880
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc2a3000/0x0/0x4ffc00000, data 0xce568d/0xd87000, compress 0x0/0x0/0x0, omap 0x11946, meta 0x2bbe6ba), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 97 ms_handle_reset con 0x5621e1bcec00 session 0x5621e0061500
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 18669568 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 18661376 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 99 ms_handle_reset con 0x5621e1456000 session 0x5621dfe81a40
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698132 data_alloc: 218103808 data_used: 29317
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 18628608 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fc298000/0x0/0x4ffc00000, data 0xce82c1/0xd90000, compress 0x0/0x0/0x0, omap 0x11fd8, meta 0x2bbe028), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 18604032 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 18546688 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1bd8800 session 0x5621e0a22380
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 100 ms_handle_reset con 0x5621e1c03800 session 0x5621e195b180
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705238 data_alloc: 218103808 data_used: 33413
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 18522112 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.190230370s of 10.334465027s, submitted: 49
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 18407424 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561800 session 0x5621df8edc00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 101 ms_handle_reset con 0x5621e3561400 session 0x5621e19bfc00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fc290000/0x0/0x4ffc00000, data 0xceaedd/0xd98000, compress 0x0/0x0/0x0, omap 0x12af4, meta 0x2bbd50c), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 102 ms_handle_reset con 0x5621e1456000 session 0x5621e1980000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 18178048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 103 ms_handle_reset con 0x5621e1bd8800 session 0x5621e199fc00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 104 ms_handle_reset con 0x5621e1c03800 session 0x5621e077a1c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717689 data_alloc: 218103808 data_used: 33413
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc289000/0x0/0x4ffc00000, data 0xcef535/0xda1000, compress 0x0/0x0/0x0, omap 0x1364f, meta 0x2bbc9b1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 717945 data_alloc: 218103808 data_used: 34639
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 18210816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621df8ec000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561000 session 0x5621e1980fc0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561800 session 0x5621e1946a80
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e038ddc0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621e196b340
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621e038dc00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 18087936 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.056212425s of 10.232484818s, submitted: 119
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e19be000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1456000 session 0x5621e05b8000
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 18096128 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf340
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e1c03800 session 0x5621df8ed880
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3561400 session 0x5621e1980c40
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 18112512 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726131 data_alloc: 218103808 data_used: 35205
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc284000/0x0/0x4ffc00000, data 0xcf0a6f/0xda6000, compress 0x0/0x0/0x0, omap 0x13c81, meta 0x2bbc37f), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17973248 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 ms_handle_reset con 0x5621e3560800 session 0x5621e070c380
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1456000 session 0x5621deefbdc0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1c03800 session 0x5621e0061340
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e1bd8800 session 0x5621e19ee1c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 17776640 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fc280000/0x0/0x4ffc00000, data 0xcf207e/0xdaa000, compress 0x0/0x0/0x0, omap 0x13ff0, meta 0x2bbc010), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 106 ms_handle_reset con 0x5621e3561400 session 0x5621deefbc00
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 107 ms_handle_reset con 0x5621e3560400 session 0x5621dfe81880
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1456000 session 0x5621e1946380
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737866 data_alloc: 218103808 data_used: 35783
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1bd8800 session 0x5621e1462540
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e1c03800 session 0x5621e0a23500
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3561400 session 0x5621e19db500
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.728271484s of 10.840334892s, submitted: 55
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 108 ms_handle_reset con 0x5621e3560000 session 0x5621e19be1c0
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 17727488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e1456000 session 0x5621e196ba40
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf62c5/0xdb3000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 17719296 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3561800 session 0x5621e070cc40
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 109 ms_handle_reset con 0x5621e3560c00 session 0x5621e0a23340
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0xcf6283/0xdb2000, compress 0x0/0x0/0x0, omap 0x14ca4, meta 0x2bbb35c), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 740491 data_alloc: 218103808 data_used: 36295
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 110 ms_handle_reset con 0x5621e1bd8800 session 0x5621dfe81500
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 17768448 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc27a000/0x0/0x4ffc00000, data 0xcf784b/0xdb2000, compress 0x0/0x0/0x0, omap 0x15133, meta 0x2bbaecd), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 17760256 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e196b180
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1c02400 session 0x5621e1981340
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1456000 session 0x5621df8ec380
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 111 ms_handle_reset con 0x5621e1457000 session 0x5621e077a700
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fc278000/0x0/0x4ffc00000, data 0xcf8ce4/0xdb3000, compress 0x0/0x0/0x0, omap 0x15543, meta 0x2bbaabd), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 17924096 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737991 data_alloc: 218103808 data_used: 32777
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 112 ms_handle_reset con 0x5621e1bd8800 session 0x5621ddecf180
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fc274000/0x0/0x4ffc00000, data 0xcfa2f2/0xdb5000, compress 0x0/0x0/0x0, omap 0x15a01, meta 0x2bba5ff), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fc272000/0x0/0x4ffc00000, data 0xcfb7be/0xdb8000, compress 0x0/0x0/0x0, omap 0x15c9a, meta 0x2bba366), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743972 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.406369209s of 13.619614601s, submitted: 155
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 17899520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 17891328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 17883136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 17743872 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config show' '{prefix=config show}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'perf dump' '{prefix=perf dump}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'perf schema' '{prefix=perf schema}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 17276928 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 17268736 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread fragmentation_score=0.000195 took=0.000113s
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 17260544 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 17252352 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 17244160 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 17244160 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 17244160 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 17235968 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 17227776 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 17219584 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 17211392 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 17203200 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 17195008 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 17186816 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 17178624 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 17170432 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17162240 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 17154048 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15066 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 17145856 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 17137664 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 17129472 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 17121280 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 17113088 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 17113088 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 17113088 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17104896 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.1 total, 600.0 interval
                                              Cumulative writes: 6272 writes, 24K keys, 6272 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                              Cumulative WAL: 6272 writes, 1344 syncs, 4.67 writes per sync, written: 0.02 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2050 writes, 5141 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s
                                              Interval WAL: 2050 writes, 951 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 17096704 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc ms_handle_reset ms_handle_reset con 0x5621dffbd400
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3703679480
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3703679480,v1:192.168.122.100:6801/3703679480]
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: mgrc handle_mgr_configure stats_period=5
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 16883712 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 16875520 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 16867328 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 16859136 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 16850944 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 16842752 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 16834560 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 16826368 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 16818176 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16809984 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 16908288 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 16891904 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config show' '{prefix=config show}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746746 data_alloc: 218103808 data_used: 36838
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16728064 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fc26f000/0x0/0x4ffc00000, data 0xcfcc6e/0xdbb000, compress 0x0/0x0/0x0, omap 0x1601f, meta 0x2bb9fe1), peers [0,1] op hist [])
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16703488 heap: 92160000 old mem: 2845415832 new mem: 2845415832
Jan 10 12:31:59 np0005580781 ceph-osd[87867]: do_command 'log dump' '{prefix=log dump}'
Jan 10 12:31:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 10 12:31:59 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640597616' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 10 12:31:59 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:31:59 np0005580781 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 10 12:31:59 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15070 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:32:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 10 12:32:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644995543' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 10 12:32:00 np0005580781 nova_compute[237049]: 2026-01-10 17:32:00.370 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:32:00 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:32:00 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 10 12:32:00 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061084163' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 10 12:32:00 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1136: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:32:00 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:32:01 np0005580781 nova_compute[237049]: 2026-01-10 17:32:01.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:32:01 np0005580781 nova_compute[237049]: 2026-01-10 17:32:01.346 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 10 12:32:01 np0005580781 nova_compute[237049]: 2026-01-10 17:32:01.347 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 10 12:32:01 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 10 12:32:01 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4206728935' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 10 12:32:01 np0005580781 nova_compute[237049]: 2026-01-10 17:32:01.492 237053 DEBUG nova.compute.manager [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 10 12:32:01 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15082 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 10 12:32:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 10 12:32:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573740819' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 10 12:32:02 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15086 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:32:02 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 10 12:32:02 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2399650282' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 10 12:32:02 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15090 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:32:02 np0005580781 ceph-mgr[75538]: log_channel(cluster) log [DBG] : pgmap v1137: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 10 12:32:03 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15094 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:32:03 np0005580781 nova_compute[237049]: 2026-01-10 17:32:03.346 237053 DEBUG oslo_service.periodic_task [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 10 12:32:03 np0005580781 nova_compute[237049]: 2026-01-10 17:32:03.378 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:32:03 np0005580781 nova_compute[237049]: 2026-01-10 17:32:03.380 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:32:03 np0005580781 nova_compute[237049]: 2026-01-10 17:32:03.380 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 10 12:32:03 np0005580781 nova_compute[237049]: 2026-01-10 17:32:03.381 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 10 12:32:03 np0005580781 nova_compute[237049]: 2026-01-10 17:32:03.382 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:32:03 np0005580781 ceph-mgr[75538]: log_channel(audit) log [DBG] : from='client.15096 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 10 12:32:03 np0005580781 ceph-mgr[75538]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:32:03 np0005580781 ceph-a4f60d8d-01ac-5e9c-b6a9-48dfc78031c4-mgr-compute-0-mkxlpr[75534]: 2026-01-10T17:32:03.704+0000 7fd5c778b640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 10 12:32:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 10 12:32:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341943870' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 10 12:32:03 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 10 12:32:03 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174244532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 10 12:32:03 np0005580781 nova_compute[237049]: 2026-01-10 17:32:03.971 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 10 12:32:04 np0005580781 nova_compute[237049]: 2026-01-10 17:32:04.155 237053 WARNING nova.virt.libvirt.driver [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 10 12:32:04 np0005580781 nova_compute[237049]: 2026-01-10 17:32:04.157 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4848MB free_disk=59.988249060697854GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 10 12:32:04 np0005580781 nova_compute[237049]: 2026-01-10 17:32:04.157 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 10 12:32:04 np0005580781 nova_compute[237049]: 2026-01-10 17:32:04.157 237053 DEBUG oslo_concurrency.lockutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 10 12:32:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 10 12:32:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265205213' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 10 12:32:04 np0005580781 nova_compute[237049]: 2026-01-10 17:32:04.265 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 10 12:32:04 np0005580781 nova_compute[237049]: 2026-01-10 17:32:04.265 237053 DEBUG nova.compute.resource_tracker [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 10 12:32:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 10 12:32:04 np0005580781 ceph-mon[75249]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562716068' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 10 12:32:04 np0005580781 nova_compute[237049]: 2026-01-10 17:32:04.284 237053 DEBUG oslo_concurrency.processutils [None req-383ebdf2-a633-4fa1-a78d-6addc8bd83c2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.386300087s of 10.565129280s, submitted: 38
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 461170 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb7f3/0x128000, compress 0x0/0x0/0x0, omap 0xb23d, meta 0x1a24dc3), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 469845 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.835360527s of 14.069359779s, submitted: 7
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09c000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471536 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1245184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 1228800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473949 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.012902260s of 11.086735725s, submitted: 4
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1163264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481182 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1130496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.026464462s of 12.046990395s, submitted: 8
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1114112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 488419 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490830 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.865109444s of 13.878160477s, submitted: 6
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 493243 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 495656 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498069 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.043815613s of 14.055473328s, submitted: 6
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500482 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 505306 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.805700302s of 12.849143982s, submitted: 6
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 510130 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512541 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959676743s of 10.976650238s, submitted: 6
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 778240 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 522187 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 524598 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 737280 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.874062538s of 15.986262321s, submitted: 10
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 531833 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 534244 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.020789146s of 13.038912773s, submitted: 8
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 536655 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 606208 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539068 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 541481 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.103586197s of 15.117080688s, submitted: 6
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543894 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546305 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 540672 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548718 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.009474754s of 12.021146774s, submitted: 6
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 555951 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558362 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.718189240s of 13.738556862s, submitted: 8
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 565595 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 245760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 73728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 335872 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 327680 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 319488 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 16.66 MB, 0.03 MB/s
Interval WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000145 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 892928 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 ms_handle_reset con 0x55d5962f7000 session 0x55d596c1a700
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 ms_handle_reset con 0x55d597b79000 session 0x55d596c0e700
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 761856 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 753664 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 745472 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 737280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 729088 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 720896 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68485120 unmapped: 712704 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 4552 writes, 20K keys, 4552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4552 writes, 515 syncs, 8.84 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d5952838d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, i
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 568006 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fe09e000/0x0/0x4ffc00000, data 0xbe289/0x12e000, compress 0x0/0x0/0x0, omap 0xb57d, meta 0x1a24a83), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 987.875549316s of 987.892700195s, submitted: 8
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 548864 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 573166 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 8749056 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 71 ms_handle_reset con 0x55d598f3ec00 session 0x55d5975fc000
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 15982592 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fd892000/0x0/0x4ffc00000, data 0x8c0ead/0x936000, compress 0x0/0x0/0x0, omap 0xbe6e, meta 0x1a24192), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 15851520 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 24100864 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 72 ms_handle_reset con 0x55d598f3f400 session 0x55d598b91c00
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 691316 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fcc1e000/0x0/0x4ffc00000, data 0x1533a7f/0x15ac000, compress 0x0/0x0/0x0, omap 0xc4ca, meta 0x1a23b36), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fcc1e000/0x0/0x4ffc00000, data 0x1533a7f/0x15ac000, compress 0x0/0x0/0x0, omap 0xc4ca, meta 0x1a23b36), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 691316 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 24092672 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.579319000s of 11.783731461s, submitted: 42
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694088 data_alloc: 218103808 data_used: 0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc1b000/0x0/0x4ffc00000, data 0x1534f2f/0x15af000, compress 0x0/0x0/0x0, omap 0xc703, meta 0x1a238fd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 24068096 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.930355072s of 34.937496185s, submitted: 13
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29800 session 0x55d596c1a1c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 23683072 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29c00 session 0x55d598e47c00
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 74 ms_handle_reset con 0x55d596c29000 session 0x55d596cc2e00
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 75 ms_handle_reset con 0x55d596c28000 session 0x55d598e74700
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 23625728 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d596c29c00 session 0x55d5975fd180
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d596c29000 session 0x55d598e26700
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d598f3f400 session 0x55d596cc2540
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 76 ms_handle_reset con 0x55d599109800 session 0x55d5975fc000
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 22331392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 22364160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 76 heartbeat osd_stat(store_statfs(0x4fcc0f000/0x0/0x4ffc00000, data 0x1539540/0x15bc000, compress 0x0/0x0/0x0, omap 0xd5db, meta 0x1a22a25), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715390 data_alloc: 218103808 data_used: 19
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 77 ms_handle_reset con 0x55d596c28000 session 0x55d596cc2a80
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 22331392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 78 ms_handle_reset con 0x55d596c29000 session 0x55d597b17180
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 22298624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 22274048 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-mon[75249]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 79 ms_handle_reset con 0x55d596c5d000 session 0x55d597b16c40
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 22249472 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcc07000/0x0/0x4ffc00000, data 0x153d348/0x15c3000, compress 0x0/0x0/0x0, omap 0xe026, meta 0x1a21fda), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 81 ms_handle_reset con 0x55d596c29800 session 0x55d5975fd180
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 22241280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 733219 data_alloc: 218103808 data_used: 8141
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 82 ms_handle_reset con 0x55d596c29c00 session 0x55d596edea80
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 22110208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447594643s of 10.715058327s, submitted: 154
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 83 ms_handle_reset con 0x55d596c28000 session 0x55d597b16fc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 83 ms_handle_reset con 0x55d596c29000 session 0x55d597b176c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 22167552 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 84 ms_handle_reset con 0x55d596c29800 session 0x55d597b16540
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 22159360 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 85 ms_handle_reset con 0x55d596c5d000 session 0x55d5988be540
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 22077440 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fc3ed000/0x0/0x4ffc00000, data 0x1d46de1/0x1ddd000, compress 0x0/0x0/0x0, omap 0xf073, meta 0x1a20f8d), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 21741568 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962711 data_alloc: 218103808 data_used: 8141
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 20537344 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 86 ms_handle_reset con 0x55d598f3ec00 session 0x55d596d128c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 20496384 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 87 ms_handle_reset con 0x55d596c28000 session 0x55d597e6a000
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 20545536 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x15499ac/0x15e2000, compress 0x0/0x0/0x0, omap 0xf5cf, meta 0x2bc0a31), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 88 ms_handle_reset con 0x55d596c29000 session 0x55d598e47880
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 88 ms_handle_reset con 0x55d596c29800 session 0x55d596edf6c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 20799488 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 89 ms_handle_reset con 0x55d596c5d000 session 0x55d597b2d180
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 20652032 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 90 ms_handle_reset con 0x55d598f3f400 session 0x55d597b17dc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 773529 data_alloc: 218103808 data_used: 8141
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 20602880 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 91 ms_handle_reset con 0x55d596c28000 session 0x55d597b17500
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 20578304 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.893272400s of 10.285517693s, submitted: 131
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 92 ms_handle_reset con 0x55d598f3f400 session 0x55d596d13880
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 19365888 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 93 ms_handle_reset con 0x55d5993b4000 session 0x55d597e6b500
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 93 ms_handle_reset con 0x55d5993b5000 session 0x55d597b161c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x1550b01/0x15ed000, compress 0x0/0x0/0x0, omap 0x10a25, meta 0x2bbf5db), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 19259392 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1551fcd/0x15f0000, compress 0x0/0x0/0x0, omap 0x10c5d, meta 0x2bbf3a3), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784952 data_alloc: 218103808 data_used: 8141
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1551fcd/0x15f0000, compress 0x0/0x0/0x0, omap 0x10c5d, meta 0x2bbf3a3), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 94 ms_handle_reset con 0x55d5993b5400 session 0x55d596ebc380
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 19243008 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 95 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15535da/0x15f4000, compress 0x0/0x0/0x0, omap 0x11026, meta 0x2bbefda), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d598f3f400 session 0x55d597bd8c40
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d596c28000 session 0x55d596ebce00
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d5993b4000 session 0x55d597e6b880
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 19185664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794599 data_alloc: 218103808 data_used: 8141
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 19185664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1554c48/0x15f8000, compress 0x0/0x0/0x0, omap 0x11347, meta 0x2bbecb9), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1554c48/0x15f8000, compress 0x0/0x0/0x0, omap 0x11347, meta 0x2bbecb9), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794599 data_alloc: 218103808 data_used: 8141
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 96 ms_handle_reset con 0x55d5993b5000 session 0x55d598e476c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 19169280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.301731110s of 17.467674255s, submitted: 110
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 97 ms_handle_reset con 0x55d59774b400 session 0x55d596cc2c40
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x15560f8/0x15fb000, compress 0x0/0x0/0x0, omap 0x11667, meta 0x2bbe999), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796587 data_alloc: 218103808 data_used: 8141
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 97 ms_handle_reset con 0x55d596c28000 session 0x55d596edf180
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 19283968 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 97 handle_osd_map epochs [98,99], i have 97, src has [1,99]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 99 ms_handle_reset con 0x55d598f3f400 session 0x55d596d13dc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 99 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x157cce6/0x1625000, compress 0x0/0x0/0x0, omap 0x118a9, meta 0x2bbe757), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 18915328 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805611 data_alloc: 218103808 data_used: 10701
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 99 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x157cce6/0x1625000, compress 0x0/0x0/0x0, omap 0x118a9, meta 0x2bbe757), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 18882560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 18587648 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 100 ms_handle_reset con 0x55d5993b7800 session 0x55d597e6b6c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 100 ms_handle_reset con 0x55d598fb8400 session 0x55d596ec8540
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 18579456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 101 ms_handle_reset con 0x55d5993b7c00 session 0x55d596c1b500
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 101 ms_handle_reset con 0x55d596c28000 session 0x55d598e8d6c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.204052925s of 10.307401657s, submitted: 49
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 18505728 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812037 data_alloc: 218103808 data_used: 10802
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fb9ff000/0x0/0x4ffc00000, data 0x157f8d4/0x162b000, compress 0x0/0x0/0x0, omap 0x11e17, meta 0x2bbe1e9), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 102 ms_handle_reset con 0x55d598f3f400 session 0x55d598e8da40
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 18489344 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 103 ms_handle_reset con 0x55d598fb8400 session 0x55d598e47a40
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 18472960 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fb9fa000/0x0/0x4ffc00000, data 0x1580ef5/0x162e000, compress 0x0/0x0/0x0, omap 0x120f9, meta 0x2bbdf07), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d5993b7800 session 0x55d597bd8e00
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fb9f4000/0x0/0x4ffc00000, data 0x1583b69/0x1636000, compress 0x0/0x0/0x0, omap 0x12577, meta 0x2bbda89), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825379 data_alloc: 218103808 data_used: 10802
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fb9f4000/0x0/0x4ffc00000, data 0x1583b69/0x1636000, compress 0x0/0x0/0x0, omap 0x12577, meta 0x2bbda89), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 18399232 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d599109400 session 0x55d598e741c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d596c28000 session 0x55d597bd8c40
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 104 ms_handle_reset con 0x55d597b78800 session 0x55d597b17500
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598f3f400 session 0x55d597e6a000
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 18096128 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8400 session 0x55d598e74fc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8c00 session 0x55d598eb3180
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d596c28000 session 0x55d596ec8700
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d597b78800 session 0x55d598eb2540
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598f3f400 session 0x55d597b17dc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d598fb8400 session 0x55d598eb3340
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 18317312 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829661 data_alloc: 218103808 data_used: 10817
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 18317312 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.297220230s of 11.404709816s, submitted: 74
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d599109000 session 0x55d597e6ac40
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fb9f1000/0x0/0x4ffc00000, data 0x1585084/0x163b000, compress 0x0/0x0/0x0, omap 0x12d33, meta 0x2bbd2cd), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830678 data_alloc: 218103808 data_used: 10833
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 ms_handle_reset con 0x55d597b78800 session 0x55d596d121c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 18161664 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d598f3f400 session 0x55d5975fddc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d598fb8400 session 0x55d596d13340
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774ac00 session 0x55d598b916c0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774a400 session 0x55d598e75dc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 17752064 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 106 ms_handle_reset con 0x55d59774ac00 session 0x55d598fdc380
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 17817600 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d597b78800 session 0x55d596c1aa80
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fb9e7000/0x0/0x4ffc00000, data 0x1588082/0x1643000, compress 0x0/0x0/0x0, omap 0x132c3, meta 0x2bbcd3d), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76570624 unmapped: 17809408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d598f3f400 session 0x55d598e8ce00
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d598fb8400 session 0x55d598e75c00
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d59774b400 session 0x55d598ff4a80
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 17514496 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 108 ms_handle_reset con 0x55d59774ac00 session 0x55d596ebc000
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843426 data_alloc: 218103808 data_used: 14965
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 17489920 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.546408653s of 10.006135941s, submitted: 114
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 109 ms_handle_reset con 0x55d597b78800 session 0x55d598e8d340
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 16400384 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 109 ms_handle_reset con 0x55d596c28000 session 0x55d596d12a80
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fb9e4000/0x0/0x4ffc00000, data 0x158a889/0x1646000, compress 0x0/0x0/0x0, omap 0x13de3, meta 0x2bbc21d), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 16424960 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 110 ms_handle_reset con 0x55d598f3f400 session 0x55d597b2d500
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 16367616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 16367616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850847 data_alloc: 218103808 data_used: 10802
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fb9dd000/0x0/0x4ffc00000, data 0x158d35f/0x164a000, compress 0x0/0x0/0x0, omap 0x1463c, meta 0x2bbb9c4), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d5993b4000 session 0x55d596cc3340
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d5993b5000 session 0x55d597b2c540
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 16547840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d596c28000 session 0x55d597bd9880
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 16769024 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 111 ms_handle_reset con 0x55d59774ac00 session 0x55d5988bfdc0
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 16769024 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 112 ms_handle_reset con 0x55d597b78800 session 0x55d5988bea80
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fba02000/0x0/0x4ffc00000, data 0x156a94e/0x1628000, compress 0x0/0x0/0x0, omap 0x14a07, meta 0x2bbb5f9), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853791 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fb9fd000/0x0/0x4ffc00000, data 0x156be1a/0x162b000, compress 0x0/0x0/0x0, omap 0x14c5d, meta 0x2bbb3a3), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 16850944 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.037557602s of 14.196849823s, submitted: 107
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 16801792 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 16728064 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'config show' '{prefix=config show}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 16334848 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 16334848 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'log dump' '{prefix=log dump}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 16318464 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'perf dump' '{prefix=perf dump}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'perf schema' '{prefix=perf schema}'
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread fragmentation_score=0.000149 took=0.000078s
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 16261120 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 16252928 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 16244736 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 16236544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 16228352 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855957 data_alloc: 218103808 data_used: 12714
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x156d2ca/0x162e000, compress 0x0/0x0/0x0, omap 0x14f67, meta 0x2bbb099), peers [0,2] op hist [])
Jan 10 12:32:04 np0005580781 ceph-osd[86809]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 16220160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
